How to sync deleted artifacts from a local repo to a remote repository in Artifactory?

I have two Artifactory instances. Let's assume the first one is ABC and the other is XYZ.
On ABC (abc.arti.com) I have a local repository, and on XYZ (xyz.arti.com) I have a remote repo that points to the local repo on ABC.
If a client sends a "download artifact" request to xyz.arti.com, XYZ checks its cache; if the artifact is not available there, it goes to ABC, saves the artifact in XYZ, and then sends it to the client. Now, if I delete the artifact from ABC, the same artifact is not deleted on XYZ. The artifact is still available on XYZ after deleting it from the local repo.
How can I sync the deleted artifacts on both repositories?

Please be informed that Artifactory smart remote repositories provide an option to detect the absence of artifacts from the original source.
Reference: Source absence detection. This would be overridden if a live deletion sync were happening.
As the artifacts are locally cached in the remote instance (XYZ, as you have described above), they will remain intact until a manual Zap Cache is executed or until the Metadata Retrieval Cache Period expires.

Related

Git Remote Updates finder in Java

I have a requirement to find out whether there are any updates in a Git remote branch compared to the local branch that was cloned earlier. If there are any updates available, the application must notify the user and, on user consent, a pull has to be performed in Java.
I tried using JGit's
org.eclipse.jgit.lib.BranchTrackingStatus.of(git.getRepository(), git.branchList().call().get(0).getName()).getBehindCount() to find out whether my local repository is behind the remote repository. This always returns 0.
The first parameter to BranchTrackingStatus.of must be a Repository object, and the object I pass is the local repository object.
I would appreciate any suggestions to tackle this scenario.
BranchTrackingStatus compares a local branch with the configured remote tracking branch, e.g. refs/heads/foo with refs/remotes/origin/foo (if the default naming convention is followed).
The method does not update the remote tracking branch. Unless you first fetch the branch in question from the remote repository, no change will be reported.
FetchResult result = git.fetch()
.setRemote("origin")
.setRefSpecs("refs/heads/foo:refs/remotes/origin/foo")
.call();
The result holds details of the successful operation, or of why it could not be completed.
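Once the fetch has completed, the tracking status can be re-checked and, on user consent, a pull performed. A minimal sketch, assuming the branch is called foo, git is the org.eclipse.jgit.api.Git instance from above, and exception handling is omitted:
// Re-evaluate the tracking status now that refs/remotes/origin/foo is up to date
BranchTrackingStatus status =
        BranchTrackingStatus.of(git.getRepository(), "refs/heads/foo");
if (status != null && status.getBehindCount() > 0) {
    // The local branch is behind the remote: notify the user and,
    // on consent, perform the pull
    git.pull().call();
}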
Does that solve your problem?

How to back up/restore a Corda node?

Once a Corda node has failed, what is the appropriate recovery process? Corda transactions are shared only with the qualified nodes of a specific business network, not with every node. Therefore, when recovering a failed node, copying data from another node would not work well; recovering from a backup is required. However, since a backup image is not exactly the same as the state of the other, healthy nodes, I would like to know how to recover the consistency of a Corda node.
Node data storage
A Corda node stores its vital information as follows:
The node's data is stored in a standard SQL database
By default, in an H2 database file called persistence.mv.db
The node's keys and certificates are stored in Java keystores in the certificates folder
Recovery from node crashes
If the node crashes:
The database and the contents of the certificates folder will not be affected
In-flight flows can be restarted from their most recent checkpoint
Artemis messages can be replayed
In other words, you can generate a new node, re-add the persistence.mv.db file, the certificates folder and the CorDapps, and the node will behave as if nothing happened when you start it up again.
Recovery from corruption/deletion of the node's files
Loss/corruption of data is non-fatal as long as you are able to recover:
The node's database
The contents of the node's certificates folder
It is the responsibility of the node's owner to protect and back up these files using standard business procedures. If both of these can be recovered and re-added to a new node, the node should spin up as usual.
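A minimal sketch of what such a backup could look like, assuming the node is stopped and using hypothetical paths (/opt/corda and /backup/corda are examples, not Corda defaults):
import java.io.IOException;
import java.nio.file.*;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Cold (node stopped) file-level backup of the two critical items:
// the H2 database file and the certificates folder.
public class CordaNodeBackup {
    public static void main(String[] args) throws IOException {
        Path nodeDir = Paths.get("/opt/corda");       // assumed node directory
        Path backupDir = Paths.get("/backup/corda");  // assumed backup target
        Files.createDirectories(backupDir);

        // 1. The node's database (H2 MVStore file)
        Files.copy(nodeDir.resolve("persistence.mv.db"),
                backupDir.resolve("persistence.mv.db"),
                StandardCopyOption.REPLACE_EXISTING);

        // 2. The node's keys and certificates
        Path certs = nodeDir.resolve("certificates");
        List<Path> entries;
        try (Stream<Path> walk = Files.walk(certs)) {
            entries = walk.collect(Collectors.toList());
        }
        for (Path src : entries) {
            Path dest = backupDir.resolve(nodeDir.relativize(src));
            if (Files.isDirectory(src)) {
                Files.createDirectories(dest);
            } else {
                Files.copy(src, dest, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}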
If the contents of the node's certificates folder cannot be recovered, you will no longer have your private key, and will not be able to spend your assets on the ledger.
If certain pieces of data cannot be recovered from the node's database, the node could attempt to re-request this data from other nodes where applicable (e.g. the transaction history). However, there is no way to force the counterparties to share this information.

Azure resource groups not deleting

I am having an issue when trying to delete a resource group, as I get the following error in Azure:
Failed to delete resource group Default-Storage-EastUS: Deletion of resource
group 'Default-Storage-EastUS' failed as resources with identifiers 'Microsoft.ClassicStorage/storageAccounts/bitnamieastusq5n61m4' could not be deleted. The provisioning state of the resource group will be rolled back. The tracking Id is '5b0424e2-bfea-4aef-a832-2230fb3bd279'. Please check audit logs for more details. (Code: ResourceGroupDeletionBlocked) Unable to delete storage account 'bitnamieastusq5n61m4': 'Storage account bitnamieastusq5n61m4 has some active image(s) and/or disk(s), e.g. bitnami-bitnami-wordpress-4.6.1-0-eastus-Q5N61m4. Ensure these image(s) and/or disk(s) are removed before deleting this storage account.'. (Code: StorageAccountOperationFailed)
This was initially an automated WordPress install from Bitnami, linked to our pay-as-you-go subscription.
On the Bitnami account the VM has been removed completely; however, it is still showing in Azure.
Bitnami/Azure resource screen shot
Under Azure Portal, I have checked the Virtual Machines list and there is nothing present.
I have also checked for any disks that may have not been removed correctly, but again there are none.
The delete process is:
Select the resource group
Choose the ellipsis (...)
Select Delete
Enter the resource group name
Click Delete
Notifications show that it does start the deleting process, but then fails with the above error.
Has anyone come across this before, or have any suggestions on how to remove this resource completely?
I have also looked under the storage account in the portal and it shows bitnamieastusq5n61m4; however, it will not delete either, apparently due to existing disks, but where are these disks?
The portal does not show any images or disks...
No VM Images
Thanks for your time and assistance.
Azure will not let you delete a storage account while it still contains images or active VHDs, and that is what is happening in your case. So, before deleting the resource group, you need to delete those image(s)/active VHD(s).
Refer to the screenshot from https://azure.microsoft.com/en-in/documentation/articles/storage-create-storage-account/
Once the storage account is empty, I would recommend using the PowerShell command with the -Force parameter:
Remove-AzureRmResourceGroup -Name "abc" -Force

Instance creation in OpenStack Nova - log file

I need to keep track of instance creation in OpenStack Nova.
That is, I need to perform some special operations whenever a new instance is created in OpenStack.
For that, I need to know where all the details are stored (in a log file).
Please could someone guide me regarding the log file for tracking instance creation, or some other way to track the same.
As far as I am aware, you have to look in the following services' log files:
nova-scheduler (often installed on the controller node). This will show which host will run the newly created virtual machine.
The logs of the nova-compute service running on the host where the virtual machine was instantiated.
You can additionally check the logs of qemu and libvirt (again, on the host where the virtual machine was instantiated).
Have in mind that the information you will find there depends on the logging level you have set in each service's configuration file. For more information about how to configure logging for the OpenStack components, refer to the official documentation "Logging and Monitoring".
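If the default verbosity does not show enough detail, you can raise the logging level in each service's configuration file. A minimal sketch for nova.conf (the file location and log path below are the usual packaged defaults, so treat them as assumptions for your deployment):
# /etc/nova/nova.conf  (edit on each node running the service you want to trace)
[DEFAULT]
# Debug-level logging records the scheduling decision and the instance spawn steps
debug = True
# Optional: explicit log file (distribution packages usually set this already)
log_file = /var/log/nova/nova-compute.log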

Deal with Jenkins password encryption when stored in an SCM

I'm currently storing the Jenkins home directory in a Git repository.
The Jenkins configuration has been initialized on machine A: security settings (authenticated LDAP), global settings, SCM credentials, etc.
When this Jenkins home is cloned from the Git repository on machine B, all passwords are encrypted. And unfortunately the Jenkins master running on machine B can't read these encrypted passwords.
Moreover, as soon as the configuration is saved, all passwords get re-encrypted, so it seems useless to edit the configuration files manually and put passwords in plain text.
Does anyone have any idea? Thanks!
Got it! Here is the result of my research.
My initial JENKINS_HOME/.gitignore file was as follows:
# Miscellaneous Jenkins litter
*.log
*.tmp
*.old
*.json
# Generated Jenkins state
/.owner
/queue.xml
/fingerprints/
/shelvedProjects/
/updates/
/logs/
# Credentials
/secrets/
secret.key
# Job state
builds/
workspace/
modules/
lastStable
lastSuccessful
nextBuildNumber
But, as explained at http://xn--thibaud-dya.fr/jenkins_credentials.html, Jenkins uses JENKINS_HOME/secrets/master.key to encrypt all passwords, whether in the global settings or in SCM credentials.
This made me think that the same master.key file was used to decrypt passwords.
So I tried removing all credential-related entries from the .gitignore file, allowing them to be pushed to my Git repo and then pulled on machine B (another brand-new Jenkins master), and... it works well! All passwords are stored encrypted, and since all masters share the same master.key file, all passwords can be decrypted.
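For reference, after removing the credential-related entries, the JENKINS_HOME/.gitignore looks like this (everything else unchanged):
# Miscellaneous Jenkins litter
*.log
*.tmp
*.old
*.json
# Generated Jenkins state
/.owner
/queue.xml
/fingerprints/
/shelvedProjects/
/updates/
/logs/
# Job state
builds/
workspace/
modules/
lastStable
lastSuccessful
nextBuildNumber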
Hope it can help someone else!
