SBT java.io.IOException: Not in GZIP format - sbt

A Windows 7 Pro machine was shut down improperly. When the machine came back up, running 'sbt test' resulted in this error:
[error] {file:/C:/Jenkins/jobs/job1/workspace/}default-2990ce/copy-resources: Error wrapping InputStream in GZIPInputStream: java.io.IOException: Not in GZIP format
How can this be fixed?

I just had this issue after a computer crash and sbt clean didn't help.
Make sure you really find and remove all target directories:
rm -rf `find . -name target`
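If the project lives on Windows (as in the Jenkins setup above) but you have a Unix-like shell such as Git Bash available (an assumption, not stated in the question), a slightly more careful sketch of the same cleanup prunes each target directory as it is found and then rebuilds from scratch:
# delete every target directory in the project tree without descending into them
find . -type d -name target -prune -exec rm -rf {} +
# rebuild from a clean slate
sbt clean test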

Related

dropbear-initramfs can't find modprobe and blkid

I'm trying to get dropbear remote SSH unlocking at boot on a Debian system that is encrypted with LVM on LUKS.
I can get it to work on my Raspberry Pi but not on my regular system.
Steps:
1. Install Debian with encrypted LVM
ls -l /lib/modules/ |awk -F" " '{print $9}'
mkinitramfs -o /boot/initramfs.gz
echo initramfs initramfs.gz >> /boot/config.txt
Then create an RSA SSH key with the following line in front and save it to /etc/dropbear-initramfs/authorized_keys:
command="/scripts/local-top/cryptroot && kill -9 `ps | grep -m 1 'cryptroot' | cut -d ' ' -f 3`" ssh...
then:
mkinitramfs -o /boot/initramfs.gz
reboot
I can type the password and decrypt with a keyboard, but when I log in with SSH I get:
/scripts/local-top/cryptroot: line 218: modprobe: not found
/scripts/local-top/cryptroot: line 378: blkid: not found
/scripts/local-top/cryptroot: line 378: blkid: not found
/scripts/local-top/cryptroot: line 378: blkid: not found
/scripts/local-top/cryptroot: line 378: blkid: not found
/scripts/local-top/cryptroot: line 378: blkid: not found
...
Please help.
Extra info:
My blkid output:
/dev/sda1: UUID="42a9ca50-b757-4e11-985f-8fc75323b598" TYPE="ext2" PARTUUID="38de37f0-01"
/dev/sda5: UUID="3448b157-a1f9-4f6a-a1ea-37e6362cdea8" TYPE="crypto_LUKS" PARTUUID="38de37f0-05"
/dev/mapper/sda5_crypt: UUID="nzAaP7-Ocx9-BJzO-BM7S-SQcY-BHqp-tbgvH6" TYPE="LVM2_member"
/dev/mapper/deb--vg-root: UUID="f8ec5b07-75fe-4870-9fb6-9e9035d21a20" TYPE="ext4"
/dev/mapper/deb--vg-swap_1: UUID="ff915ae6-210f-4bbb-8988-b30aacae3dea" TYPE="swap"
My /etc/fstab:
/dev/sda1: UUID="42a9ca50-b757-4e11-985f-8fc75323b598" TYPE="ext2" PARTUUID="38de37f0-01"
/dev/sda5: UUID="3448b157-v4a5-4f6a-a1ea-28e6362cdea9" TYPE="crypto_LUKS" PARTUUID="38de37f0-05"
/dev/sdb1: UUID="CC55-BAFE" TYPE="vfat" PARTUUID="0000370e-01" /dev/mapper/sda5_crypt: UUID="nzAaP7-Ocx9-BJzO-BM7S-SQcY-BHqp-tbgvH6" TYPE="LVM2_member"
/dev/mapper/theproject1--vg-root: UUID="f8ec5b07-75fe-4870-9fb6-9e9035d21a20" TYPE="ext4"
/dev/mapper/theproject1--vg-swap_1: UUID="ff915ae6-210f-4bbb-8988-b30aacae3dea" TYPE="swap"
My /etc/crypttab:
sda5_crypt UUID=3448b157-v4a5-4f6a-a1ea-28e6362cdea9 none luks,discard
I had a similar issue, but a different resolution.
I have an encrypted Kali Linux computer and use the dropbear-initramfs package to unlock the root disk at boot remotely, over the dropbear SSH session, using cryptroot-unlock.
This worked fine for a long time, but after some update it stopped working. I could SSH to the initramfs dropbear session, but when I ran "cryptroot-unlock" it would hang and I would get the error "Error: Timeout reached while waiting for askpass."
Eventually I found the solution:
https://www.mail-archive.com/debian-bugs-dist@lists.debian.org/msg1687842.html
Basically, to fix it I uninstalled the "cryptsetup-nuke-password" package with "apt remove cryptsetup-nuke-password", then rebooted, and after that I could successfully open the encrypted root partition remotely via dropbear-initramfs.
If you are getting the error "Error: Timeout reached while waiting for askpass." when SSH'ed into your dropbear-initramfs at boot and trying to cryptroot-unlock the drive, you can confirm whether the issue is the same as mine by running "ps" and looking through the running processes. If you see something similar to "/lib/cryptsetup/askpass.cryptsetup Please unlock disk ..", then I suggest logging on via the console and removing "cryptsetup-nuke-password" to resolve the issue.
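A rough sketch of the check and fix described above, assuming the same symptoms (the askpass process name is the one I saw; yours may differ):
# at the dropbear initramfs prompt, while cryptroot-unlock hangs:
ps | grep askpass
# expect something like: /lib/cryptsetup/askpass.cryptsetup "Please unlock disk ..."

# after unlocking at the console and booting the full system:
apt remove cryptsetup-nuke-password
# removing the package should regenerate the initramfs; rebooting confirms the fix
reboot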

Rsync command not working

I am trying to run rsync as follows and am running into the error sshpass: Failed to run command: No such file or directory. I verified that the source /local/mnt/workspace/common/sectool and destination /prj/qct/wlan_rome_su_builds directories are available and accessible. What am I missing, and how do I fix this?
username@xxx-machine-02:~$ sshpass -p 'password' rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
sshpass: Failed to run command: No such file or directory
Would it be possible for you to check whether 'rsync' works without 'sshpass'?
Also, check whether the ports used by rsync are open. You can find the port info via cat /etc/services | grep rsync
The first thing is to make sure that the SSH connection is working smoothly. You can check this via "sudo ssh -vvv cnssbldsw@hydwclnxbld4" (please post the output). If you receive a message such as "ssh: connect to host hydwclnxbld4 port 22: Connection refused", the issue is with openssh-server (not installed, or a broken package). Let's see what you get for the first command.
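A minimal sketch of the checks suggested above, using the host and paths from the question (note that passing the password with sshpass -p exposes it to other users on the machine, so treat this as a test only):
# 1. confirm plain SSH connectivity; post the verbose output if it fails
ssh -vvv cnssbldsw@hydwclnxbld4 'echo ok'
# 2. confirm rsync works without sshpass (it will prompt for the password)
rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds
# 3. only then wrap it in sshpass
sshpass -p 'password' rsync --progress -avz -e ssh /local/mnt/workspace/common/sectool cnssbldsw@hydwclnxbld4:/prj/qct/wlan_rome_su_builds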

Error happens when trying to umount the Lustre file system

When I umount Lustre FS it displays:
[root@cn17663-ens4 mnt]# umount /mnt/lustre
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
and if I add the force option -f it gives the same result:
[root@cn17663-ens4 mnt]# umount /mnt/lustre -f
umount: /mnt/lustre: target is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
When I try to list the directory it gives me :
[root@cn17663-ens4 mnt]# ls
ls: cannot access lustre: Cannot send after transport endpoint shutdown
lustre
and I cannot work out the reason or how to solve it.
Did you actually try running lsof /mnt/lustre (as the error message recommends) to see what is using the filesystem? This problem is not unique to Lustre, but true of any local filesystem as well - if there is a process using the filesystem (current working directory or open file) then it can't be unmounted until that process stops using it (cd out of /mnt/lustre or close the open file(s)).
I find I can use umount -l /mnt/xx to solve this problem!
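A short sketch combining both suggestions, using the mount point from the question (lsof and fuser may need to be installed on the client node):
# find the processes keeping the mount point busy
lsof /mnt/lustre
fuser -vm /mnt/lustre
# stop those processes (or cd out of /mnt/lustre), then retry
umount /mnt/lustre
# last resort: a lazy unmount detaches the mount point now and cleans up
# once nothing references it any more
umount -l /mnt/lustre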

Memory issue with meteor up (mup) on Digital Ocean

I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create and enable the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This swap will remain active on the virtual private server only until the machine reboots. You can make the swap permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 makes the swap act as an emergency buffer, helping to prevent out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
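To double-check the whole setup afterwards (a sketch; the exact sizes and output will differ on your Droplet):
swapon -s                     # the swap file should be listed
free -m                       # swap total should match the size you created
cat /proc/sys/vm/swappiness   # should print 10
ls -l /swapfile               # should show -rw------- and root:root ownership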
This only worked for me after increasing the swap space to 1 GB:
Turn all swap off:
sudo swapoff -a
Resize the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Make the swap file usable:
sudo mkswap /swapfile
Turn swap back on:
sudo swapon /swapfile
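If dd is slow, fallocate is a quicker way to create the same 1 GB file (a sketch; this works on the ext4 filesystems typical of these Droplets, but some filesystems do not support swap files made with fallocate, in which case stick with dd):
sudo swapoff -a
sudo fallocate -l 1G /swapfile   # allocate the space without writing zeros
sudo chmod 0600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
free -m                          # confirm the larger swap is active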

What causes the error "Couldn't canonicalise: No such file or directory" in SFTP?

I am trying to use SFTP to upload an entire directory to a remote host, but I got an error. (I know SCP works, but I really want to figure out the problem with SFTP.)
I used the command below:
(echo "put -r LargeFile/"; echo quit)|sftp -vb - username@remotehost:TEST/
But I got the errors "Couldn't canonicalise: No such file or directory" and "Unable to canonicalise path "/home/s1238262/TEST/LargeFile"".
I thought it was caused by access rights, so I opened an SFTP connection to the remote host in interactive mode and tried to create a new directory "LargeFile" in TEST/, which succeeded. Then I used the same command as above to upload the entire directory "LargeFile", and that also succeeded. The subdirectories in LargeFile were created or copied automatically.
So, I am confused. It seems only the LargeFile/ directory cannot be created in non-interactive mode. What's wrong with it or my command?
With SFTP you can only copy into a directory that already exists, so:
> mkdir LargeFile
> put -r path_to_large_file/LargeFile
Same as the advice in the link from @Vidhuran, but this should save you some reading.
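Since the question used non-interactive batch mode, the same idea can be expressed as a batch script (a sketch with the placeholder names from the question; the leading '-' tells sftp -b not to abort if the directory already exists):
(echo "-mkdir LargeFile"; echo "put -r LargeFile"; echo quit) | sftp -b - username@remotehost:TEST/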
This error could possibly occur because of the -r option. Refer to https://unix.stackexchange.com/questions/7004/uploading-directories-with-sftp
A better way is to use scp:
scp -r LargeFile/ username@remotehost:TEST/
The easiest way for me was to zip my folder locally into LargeFile.zip and simply put LargeFile.zip:
zip -r LargeFile.zip LargeFile
sftp www.mywebserver.com (or the IP of the web server)
put LargeFile.zip (it will land in your login directory on the remote server)
then, on the server:
unzip LargeFile.zip
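The whole round trip can also be scripted, assuming you have shell access on the server and unzip is installed there (a sketch using the placeholder hostname from above):
zip -r LargeFile.zip LargeFile
echo "put LargeFile.zip" | sftp -b - www.mywebserver.com
ssh www.mywebserver.com 'unzip -o LargeFile.zip'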
If you are using Ubuntu 14.04, sftp has a bug: if a '/' is appended to the name, you will get the "Couldn't canonicalize: Failure" error.
For example:
sftp> cd my_inbox/ ##will give you an error
sftp> cd my_inbox ##will NOT give you the error
Notice how the forward-slash is missing in the correct request. The forward slash appears when you use the TAB key to auto-populate the names in the path.
