Where's the encryption key stored in Jenkins?

I am trying to migrate credentials from one Jenkins instance to another, but the usernames/passwords are encrypted in ${JENKINS_HOME}/credentials.xml.
I found this answer, but the problem is it doesn't explain where someone would find the encryption key in order to successfully migrate the credentials.
Any help is greatly appreciated!
EDIT: More information: my ${JENKINS_HOME} is on a separate volume, which I detach and re-attach to the new VM, and it still doesn't work for me.

I found this analysis (link is dead as of June 2020, archived here) very helpful. In a nutshell:
Jenkins uses master.key to encrypt the key stored in hudson.util.Secret.
That key is then used to encrypt the passwords in credentials.xml.
When I need to bootstrap new Jenkins instances with some default passwords, I use a template directory tree that contains
secrets/hudson.util.Secret and
secrets/master.key
This works fine.
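As a rough illustration, bootstrapping from such a template can look like the sketch below. The JENKINS_HOME path, the template location, the jenkins user and the systemd service name are assumptions about a typical Linux install, not part of the original answer.
# Minimal sketch: seed a new Jenkins home from a template directory tree.
JENKINS_HOME=/var/lib/jenkins               # assumed location
TEMPLATE_DIR=/opt/jenkins-template          # hypothetical template tree
sudo systemctl stop jenkins                 # Jenkins must not be running
# Copy the two secrets that credential encryption depends on.
sudo install -d -o jenkins -g jenkins -m 700 "$JENKINS_HOME/secrets"
sudo cp "$TEMPLATE_DIR/secrets/master.key" \
        "$TEMPLATE_DIR/secrets/hudson.util.Secret" \
        "$JENKINS_HOME/secrets/"
# Pre-seeded credentials that were encrypted with that key pair.
sudo cp "$TEMPLATE_DIR/credentials.xml" "$JENKINS_HOME/"
sudo chown -R jenkins:jenkins "$JENKINS_HOME/secrets" "$JENKINS_HOME/credentials.xml"
sudo systemctl start jenkins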

Regarding Jenkins migration, I recently ran into this situation, and after some testing the following workaround worked for me.
Here is what I did:
I moved the files and folders below from the source Jenkins to the target:
$JENKINS_HOME/secret.key
$JENKINS_HOME/secrets
$JENKINS_HOME/users
$JENKINS_HOME/credentials.xml
Please note: these files should not be moved:
$JENKINS_HOME/identity.key.enc
$JENKINS_HOME/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
otherwise you will see the error below after starting Jenkins:
java.lang.AssertionError: InstanceIdentity is missing its singleton
Jenkins will automatically regenerate those two files. Once it has started, you should be good.
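For reference, a shell sketch of those steps, assuming both homes are reachable from one host and Jenkins is stopped on the target; the SRC/DST paths, the jenkins user and the systemd service name are placeholders:
SRC=/mnt/old-jenkins-home                   # detached volume from the source VM
DST=/var/lib/jenkins                        # target JENKINS_HOME
sudo systemctl stop jenkins
# Replace the target's secrets and users with the source's copies.
sudo rm -rf "$DST/secrets" "$DST/users"
sudo cp -r  "$SRC/secrets"         "$DST/secrets"
sudo cp -r  "$SRC/users"           "$DST/users"
sudo cp     "$SRC/secret.key"      "$DST/"
sudo cp     "$SRC/credentials.xml" "$DST/"
# Do not carry over the instance identity; Jenkins regenerates it on startup.
sudo rm -f "$DST/identity.key.enc"
sudo rm -f "$DST/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY"
sudo chown -R jenkins:jenkins "$DST"
sudo systemctl start jenkins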

Related

Binary provider has no content

I suddenly had to take over responsibility for Artifactory (the responsible employee left). I've never worked with it before, and I've spent the day trying to learn the product and figure things out.
Problem Context:
Artifactory OSS 6.0.2 (rev 60002900) is deployed on an Ubuntu VM in Azure, with the application installed on a mounted disk.
The disk got full and the application crashed.
I increased the disk size, repartitioned and re-mounted it, and Artifactory came up again, but now I'm getting the following error message in the browser:
{
  "errors" : [ {
    "status" : 500,
    "message" : "Could not process download request: Binary provider has no content for 'b8135c33f045ab2cf45ae4d256148d66373a0c89'"
  } ]
}
I have searched a bit and found various possible solutions.
This one: Artifactory has lost track of local artifacts
seems the most promising, since the context of our issue is similar, but I don't see those paths, i.e. I do see the filestore and everything in it, but not the other paths/files mentioned in the conversation.
I also found this: https://www.jfrog.com/jira/browse/RTFACT-6324 but again I am not finding those paths in our deployment.
To the best of my understanding, it seems that if I somehow "reinstall" the filestore and/or database, things should work?
Is there a clear manual, or something basic I'm missing? I'd like to avoid having to install everything from scratch.
Any help would be most appreciated, as our entire Dev org is now sort of blocked and trying to work around it locally until this is resolved.
I am a JFrog Support Engineer and we saw your issue; we will contact you on other channels in order to help you resolve it.
Edit:
After reaching out, we found that this issue was caused by a specific file that was corrupted or missing from your filestore; after deleting this file and re-pulling it, the issue was resolved.
To further elaborate on this issue and what can cause it:
Artifactory implements checksum-based storage. All files deployed to or cached in Artifactory are renamed to their checksum value and saved in the filestore, and Artifactory creates a pointer in the database containing the name, checksum and some other properties of the file. This allows for more efficient storage, since each file is saved only once in the filestore but can have multiple pointers in the database (in various locations inside Artifactory, even different repositories or archives).
When a file gets corrupted in the filestore or even deleted (without deleting it from Artifactory), this issue can manifest, since there is still a pointer to the file in Artifactory's database but the binary itself no longer exists in the filestore.
This can be triggered by various causes, such as connection issues with NFS/S3/other types of storage, files being corrupted or deleted from the filestore, and so on.
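To make the checksum layout concrete: with the default file-system binary provider, a binary is stored under data/filestore/<first two characters of the SHA-1>/<SHA-1>. The sketch below checks whether the checksum from the error message actually exists on disk; the ARTIFACTORY_HOME path is a placeholder, and the layout can differ for S3/NFS or sharded providers.
ARTIFACTORY_HOME=/var/opt/jfrog/artifactory     # placeholder path
SHA1=b8135c33f045ab2cf45ae4d256148d66373a0c89   # checksum from the 500 error
FILE="$ARTIFACTORY_HOME/data/filestore/${SHA1:0:2}/$SHA1"
if [ -f "$FILE" ]; then
    # The file exists; re-hash it to verify it is not corrupted.
    echo "found, actual sha1: $(sha1sum "$FILE" | awk '{print $1}')"
else
    echo "missing from filestore: $FILE"
fi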
Another edit:
You can also use a user plugin called "filestoreIntegrity" that goes through all the file pointers in your database and checks whether they exist in the filestore. This way you can find out whether there are corrupted or missing files and fix the issue.

How do I completely remove Exceptionless from a server?

I recently installed Exceptionless on a server for testing purposes. Now how do I go about completely removing it from said server? I have uninstalled it and deleted all the directories for both Elasticsearch and Exceptionless, but when I reinstall it, it says my email already exists. I basically need to completely remove the tool from my system so that I can try different configurations. Any help would be greatly appreciated.
I work on the Exceptionless project. You just need to remove the website and the Elasticsearch instance that you configured (if you only set it up for Exceptionless). If you delete the Elasticsearch folder, you should be completely clean. If you are using Redis, you may also want to flush the Redis database to start over. Were you able to get it working?
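If it helps, a hedged cleanup sketch, assuming the Elasticsearch node on localhost:9200 and the Redis instance were dedicated to Exceptionless (this wipes their data):
# Drop all indices, including the Exceptionless ones (newer Elasticsearch
# versions may require listing index names explicitly).
curl -X DELETE "http://localhost:9200/_all"
# Clear any cached/queued Exceptionless data in Redis.
redis-cli FLUSHALL
# Then remove the Exceptionless website/app files and the Elasticsearch
# installation directory as described above.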

How do I delete wp-config from github recursively?

So I am a noob, and looking back I don't know what I was thinking, but I just realized I have uploaded my wp-config file for WordPress to GitHub, which means my access keys and database login are out there for the world to see. In the short term I have converted the repository to private, but I need to figure out how to remove the file from all of the repository's commits. I found this, https://help.github.com/articles/remove-sensitive-data/, but I am afraid I don't quite understand it and I am not sure how to use it. I have Git Shell, but I have only really used the GitHub software. Can anyone walk me through the steps to take? Or am I better off deleting the entire repository and starting over?
Even if you converted it to private, it was online for a while. Check their red "Danger" text:
Danger: Once you have pushed a commit to GitHub, you should consider
any data it contains to be compromised. If you committed a password,
change it!
Change the password, then try this repo cleaner:
https://rtyley.github.io/bfg-repo-cleaner/
You'll need Java. If you consider it too much work, just delete and recreate the repo, but change the exposed passwords anyway.
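For reference, a typical BFG run looks roughly like this; the repository name is a placeholder, bfg.jar is the jar downloaded from the link above, and the exposed keys should be rotated regardless:
# Work on a fresh mirror clone so every branch and tag gets rewritten.
git clone --mirror git@github.com:yourname/yourrepo.git
java -jar bfg.jar --delete-files wp-config.php yourrepo.git
cd yourrepo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
# Force-updates all refs on GitHub with the cleaned history.
git push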

Writing an appspec.yml File for Deployment from S3 (and/or Bitbucket) to AWS CodeDeploy

I'd like to make it so that a commit to our Bitbucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section in the appspec.yml file, and I'm also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but I'm confused about what to put in the fields above; the more I explore, the more confused I get.
Consider that I am new to AWS CodeDeploy.
Also, it would be very helpful if someone could point me to a step-by-step guide on how to configure and automate CodeDeploy.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd recommend the following:
Try running the First Run Wizard on the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle, with an appspec file included.
Once you want to try a deployment yourself, the Get Started doc is a great place to help you with some prerequisite settings like the IAM roles.
Then probably try some tutorials for a sample app too, which give you some idea about deployment groups, deployment configurations, revisions and so on.
The next step should be to create a bundle for your own use case; the AppSpec file doc is a great place to refer to. As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything there, the lifecycle events can be left empty. BeforeInstall can be used for pre-install tasks, such as decrypting files and creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions (see the sketch at the end of this answer).
Now it comes to the fun part! This blog post goes into detail about how to integrate with GitHub (it is similar for Bitbucket). It's a little long but really useful, and it also covers how to deploy automatically whenever a new commit is pushed. Currently Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are plenty of other ways to achieve the same purpose, such as Lambda and so on.
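Putting the pieces above together, here is a hedged sketch of a minimal revision bundle; the destination directory and the hook script names are assumptions, not something CodeDeploy prescribes:
# Write a minimal appspec.yml at the root of the revision bundle.
cat > appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /                      # everything in the bundle, relative to its root
    destination: /var/www/myapp    # where CodeDeploy copies the files on the instance
hooks:
  BeforeInstall:
    - location: scripts/backup_current.sh   # e.g. back up the running version
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh    # e.g. set permissions, write config
      timeout: 300
      runas: root
EOF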

are connection strings safe in config.json

I am starting to play around with MVC 6 and I am wondering, with the new config.json structure... are my connection strings safe in the config.json file?
Also, I was watching a tutorial video and saw that the person only put their connection strings in their config.dev.json file, not in config.json as well. That would mean the application will not have the connection strings on the production side, correct? He must have meant to put them in both.
Thanks a lot for the help!
I think the Working with Multiple Environments document sums it up pretty well.
Basically, you can farm secret settings such as connection strings out into separate files. These files are then ignored by your source control system, and every developer has to create the file manually on their system (it might help to add some documentation on how to set up the project from a fresh clone).
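A hedged sketch of that setup, reusing the config.dev.json name from the question (the JSON keys mirror the default MVC 6 template and are assumptions); the file is ignored by Git and each developer recreates it locally:
# Keep the secret file out of source control.
cat >> .gitignore <<'EOF'
# local-only settings (connection strings, API keys)
config.dev.json
EOF
# Each developer creates the file locally from documented instructions.
cat > config.dev.json <<'EOF'
{
  "Data": {
    "DefaultConnection": {
      "ConnectionString": "Server=localhost;Database=MyAppDev;Trusted_Connection=True;"
    }
  }
}
EOF
The file also has to be registered with the application's configuration builder (typically an AddJsonFile call marked optional) so builds don't fail when it is absent.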
For production, the build will include the production settings. Typically, these are provided by a build server, where they are locked away from developers. I'm not sure whether that is fully automatic with MVC Core or whether you have to add some kind of build step to do it, but that is how it is normally done.
If you are worried about storing connection strings in the production environment securely, you can extend the framework with your own configuration provider.
