I have been able to successfully install and start the Cloudera CDH 5 server and manager and all the core services along with that, viz. HDFS, Hue, Hive, etc. However, I recently deleted the temporary HDFS directory (/dfs/*) and then formatted the NameNode due to certain issues. Now I am finding all sorts of new issues which I am not able to solve.
Some are given below:
a problem with Hue, and
a problem with HDFS.
Any help would be highly appreciated.
Thanks in advance.
Edit: I have tried creating all those missing directories in both HDFS and the local FS, and have tried various owners for them, without success.
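For reference, these are the kinds of commands I tried; the exact directory names and owners are assumptions based on the usual CDH layout, not an exact record:

sudo -u hdfs hdfs dfs -mkdir -p /tmp
sudo -u hdfs hdfs dfs -chmod 1777 /tmp
sudo -u hdfs hdfs dfs -mkdir -p /user/hue
sudo -u hdfs hdfs dfs -chown hue:hue /user/hue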
What helped me in the end, and was the easiest solution, was deleting all those services from the Cloudera web manager and re-adding them.
First of all, I realize similar questions have been asked but none of them seem to have the same problem and I can't find a solution.
I can create tables and do read/write operations perfectly well from Python accessing my SQLite database. However, when trying to access the database through DBeaver I get the following issues:
First, when trying to connect to the db file, it asks me "A file named database.db already exists. Do you want to replace it?"
Second, when trying to look at the tables via the GUI, it loads for a couple of seconds before showing an error.
I have not found a way to solve this issue. Does anyone have experience with this and a solution?
EDIT: I want to add what SQLite has to say about the given error: https://www.sqlite.org/rescode.html#busy
It states that the error occurs "because of concurrent activity by some other database connection". I don't know where this concurrent activity would come from, though, as I'm closing everything and I'm just trying to look at the tables in the GUI. I think the issue has something to do with the first issue, where it asks me if I want to replace the file.
Based on the previous comments, uninstall the DBeaver snap:
snap remove dbeaver-ce
and install it using the .deb package from the official site:
wget https://dbeaver.io/files/dbeaver-ce_latest_amd64.deb
sudo apt install ./dbeaver-ce_latest_amd64.deb
This worked for me.
All credit to the previous comments =)
TL;DR: If your database file is located on a mounted filesystem, you need to give DBeaver permission to read files from mounted filesystems.
I have found two ways to solve this issue on Ubuntu:
1: Make sure your database file is in your home directory. Since DBeaver has permission to access your home directory, this will work.
OR
2: If you have installed DBeaver from:
the Ubuntu Software Center directly, or
from the terminal using snap install,
and your database file is located on a mounted filesystem, head over to Ubuntu Software Center => Installed, find DBeaver in the list and click on it. In the next window, at the top left, click on Permissions and toggle "Read system mount information and disk quotas", enter your password in the authentication prompt, and you're good to go.
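If you prefer the terminal, the same permission can likely be granted by connecting the snap's interface directly; this assumes the toggle corresponds to the standard mount-observe interface, so list the snap's connections first to confirm:

snap connections dbeaver-ce
sudo snap connect dbeaver-ce:mount-observe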
We are running a local installation of Artifactory Pro which contains around 1M artifacts. Recently, we tried to migrate from the embedded Derby DB to Postgres and switched back to Derby because of errors occurring during the migration.
After that, users reported missing files, mostly maven-metadata.xml but also at least one pom.xml. The files are missing on the filesystem.
The only way I can think of is to query the Artifactory API for all files and try to download each one to check that it can still be retrieved. Is there a better way to verify that all artifacts in Artifactory actually exist on the filesystem?
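For reference, a rough sketch of that brute-force check, assuming the repository is called my-repo (a placeholder) and using Artifactory's file-list API; adjust the host and credentials to your setup:

# list every file in the repo, then HEAD-request each download URL
curl -s -u "$USER:$PASS" \
  "https://artifactory.example.com/artifactory/api/storage/my-repo?list&deep=1" \
  | jq -r '.files[].uri' \
  | while read -r f; do
      code=$(curl -s -o /dev/null -w '%{http_code}' -I -u "$USER:$PASS" \
        "https://artifactory.example.com/artifactory/my-repo$f")
      [ "$code" = "200" ] || echo "MISSING ($code): $f"
    done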
Welcome, Thomas! 👋🏻
Although those kinds of errors don't happen in normal operation, migrating a large number of artifacts back and forth can sometimes lead to such problems.
We have a user plugin that finds them, so check it out; it looks like it is exactly what you need.
I'd like to make it so that a commit to our Bitbucket repo (or S3 bucket) automatically deploys code (using CodeDeploy) to our EC2 instances. I'm not clear on what to use for the 'source' and 'destination' entries under the 'files' section in the appspec.yml file, and I'm also not clear on what to put in BeforeInstall and AfterInstall under the 'hooks' section. I've found some examples on Google and in the AWS documentation, but I'm confused about what to put in the above fields. The more I explore, the more confused I get.
Please consider that I am new to AWS CodeDeploy.
It would also be very helpful if someone could point me to a step-by-step guide on how to configure and automate CodeDeploy.
I was wondering if someone could help me out?
Thanks in advance for your help!
Thanks for using CodeDeploy. For new users, I'd recommend the following:
Try running the First Run Wizard in the console; it will show you the general process of how a deployment goes. It also provides a default deployment bundle, with an appspec file included.
Once you want to try a deployment yourself, the Get Started doc is a great place to help you with some prerequisite settings like the IAM role.
Then try some of the tutorials for a sample app, which will give you an idea about deployment groups, deployment configurations, revisions and so on.
The next step should be creating a bundle for your own use case; the AppSpec file doc is a great reference. As for your concerns about BeforeInstall and AfterInstall: if your application doesn't need to do anything there, those lifecycle events can simply be left empty. BeforeInstall can be used for pre-install tasks, such as decrypting files or creating a backup of the current version, while AfterInstall can be used for tasks such as configuring your application or changing file permissions; see the sketch below.
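For instance, here is a minimal sketch of an appspec.yml; the destination path and the script names under scripts/ are illustrative assumptions, not values from your setup:

version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
hooks:
  BeforeInstall:
    - location: scripts/backup_current.sh
      timeout: 300
      runas: root
  AfterInstall:
    - location: scripts/configure_app.sh
      timeout: 300
      runas: root

Here source: / means "everything in the revision bundle", destination is where those files land on the instance, and the hook scripts are shipped inside the bundle itself.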
Now it comes to the fun part! This blog talks about the details of how to integrate with GitHub (it's similar for Bitbucket). It's a little long but really useful, and it also covers how to deploy automatically whenever a new commit is pushed. Currently, Jenkins and CodePipeline are really popular for auto-triggered deployments, but there are plenty of other ways to achieve the same purpose, such as Lambda and so on.
I am trying to migrate the credentials from one Jenkins instance to another, but the usernames/passwords are encrypted in ${JENKINS_HOME}/credentials.xml.
I found this answer, but the problem is that it doesn't explain where one would find the encryption key in order to successfully migrate the credentials.
Any help is greatly appreciated!
EDIT: More information: my ${JENKINS_HOME} is on a separate volume which I detach and re-attach to the new VM, and it still doesn't work for me.
I found this analysis (link is dead as of June 2020, archived here) very helpful. In a nutshell:
Jenkins uses the master.key to encrypt the key hudson.util.Secret.
This key is then used to encrypt the password in credentials.xml.
When I need to bootstrap new Jenkins instances with some default passwords, I use a template directory tree that contains
secrets/hudson.util.Secret and
secrets/master.key
This works fine.
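As a sketch of that bootstrap step, with /opt/jenkins-template and /var/lib/jenkins as placeholder paths:

# seed a fresh instance with the template's secret files
install -d -m 700 /var/lib/jenkins/secrets
cp /opt/jenkins-template/secrets/master.key \
   /opt/jenkins-template/secrets/hudson.util.Secret \
   /var/lib/jenkins/secrets/
chown -R jenkins:jenkins /var/lib/jenkins/secrets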
Regarding Jenkins migration, I recently experienced this situation, and after a few tests this workaround worked for me.
Here is what I did:
I moved the files and folders below from the source Jenkins to the target:
$JENKINS_HOME/secret.key
$JENKINS_HOME/secrets
$JENKINS_HOME/users
$JENKINS_HOME/credentials.xml
Please note: these files should not be moved:
$JENKINS_HOME/identity.key.enc
$JENKINS_HOME/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY
otherwise you will see the following error after starting Jenkins:
java.lang.AssertionError: InstanceIdentity is missing its singleton
Jenkins will automatically regenerate those two files. Once it has started, you should be good.
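A minimal sketch of the whole procedure, with SRC and DST as placeholders for the old and new $JENKINS_HOME:

SRC=/mnt/old-jenkins-home   # placeholder
DST=/var/lib/jenkins        # placeholder
cp "$SRC/secret.key" "$DST/"
cp -r "$SRC/secrets" "$DST/"
cp -r "$SRC/users" "$DST/"
cp "$SRC/credentials.xml" "$DST/"
# drop the instance identity so Jenkins regenerates it on startup
rm -f "$DST/identity.key.enc" \
      "$DST/secrets/org.jenkinsci.main.modules.instance_identity.InstanceIdentity.KEY"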
So I wrote a script that creates an SQL dump of the Drupal databases and also creates a tar of the www directory. I took these off the server and put them on my local machine. I now want to use these backup files to test whether the backup is stable, as well as to learn the process.
My problem is that I can't find any clear instructions on how to do this. Can anyone give me a hand?
Any help is much appreciated.
You need to have a LAMP stack installed on your local machine. In addition, you'll need to modify the settings.php file to change the database connection strings to match your local environment. You may also need to modify the $base_url variable in settings.php.
This would not be necessary if you were simply restoring in place, but since you're moving the install, it is required.
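A rough sketch of the restore, assuming a MySQL-based LAMP stack; the database name, dump file, archive name, and web root are placeholders:

# create a local database and import the dump
mysql -u root -p -e "CREATE DATABASE drupal_local;"
mysql -u root -p drupal_local < drupal_backup.sql
# unpack the site files into the local web root
sudo tar -xzf www_backup.tar.gz -C /var/www/html
# then point sites/default/settings.php at the local DB and adjust $base_url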