Can't complete the mup setup in Meteor-Up - meteor

I have not been able to get past mup setup. I get the following error:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Started TaskList: Setup (linux)
[212.1.213.20] - Installing Node.js
[212.1.213.20] ✘ Installing Node.js: FAILED
-----------------------------------STDERR-----------------------------------
Warning: Permanently added '212.1.213.20' (RSA) to the list of known hosts.
stdin: is not a tty
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
-----------------------------------STDOUT-----------------------------------
----------------------------------------------------------------------------
Completed TaskList: Setup (linux)
I've found a lot of posts about the error stdin: is not a tty, but none of them make much sense to me.

Open your /etc/sudoers file, find the line that says Defaults requiretty, and change it to Defaults !requiretty.
This will disable the tty requirement globally.
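For example (a sketch; the file must be edited with visudo, and the per-user variant below assumes your deploy user is named meteoruser):
sudo visudo
# change this line:
Defaults requiretty
# to this, disabling the tty requirement globally:
Defaults !requiretty
# or, less invasively, exempt only the deploy user:
Defaults:meteoruser !requiretty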

Related

OpenVAS installation and running errors

I've installed Greenbone Security Assistant Version 9.0.1 (OpenVAS) following these instructions on Ubuntu 20.04 in VirtualBox.
sudo apt install postgresql
sudo add-apt-repository ppa:mrazavi/gvm
sudo apt install gvm
greenbone-nvt-sync
sudo greenbone-scapdata-sync
sudo greenbone-certdata-sync
Unfortunately, it does not work.
When I try to create a task with the Wizard, the task completes in a moment, with an empty log. And that's all.
I've tried three commands:
systemctl status ospd-openvas # scanner
systemctl status gvmd # manager
systemctl status gsad # web ui
Everything is okay, except ospd-openvas. The status is green and active, but there are some errors too:
Jul 20 15:00:27 alex-VirtualBox ospd-openvas[833]: OSPD - openvas:
ERROR: (ospd_openvas.daemon) Failed to create feed lock file
/var/run/ospd/feed-update.lock. [Errno 2] No such file or directory:
'/var/run/ospd/feed-update.lock'
From the error message it looks like the directory /var/run/ospd/ does not exist.
Create the directory and try to restart the service.
In ubuntu 20.04 /var/run points to /run which is a temporary file system. That means that if you create the directory /var/run/ospd manually, it will be gone after the next reboot. To fix it permanently (in case the missing directory is the issue), please refer to this post.
This may help some people with some of the issues I've been facing:
mkdir -p /var/run/ospd/
touch /var/run/ospd/feed-update.lock
chown gvm:gvm /var/run/ospd/feed-update.lock
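If the directory keeps vanishing because /run is a tmpfs, one way to recreate it at every boot (a sketch using systemd's tmpfiles mechanism; this may or may not be what the linked post suggests) is a tmpfiles.d entry:
# /etc/tmpfiles.d/ospd.conf - recreate /run/ospd at boot, owned by the gvm user
d /run/ospd 0755 gvm gvm -
Running sudo systemd-tmpfiles --create /etc/tmpfiles.d/ospd.conf applies it immediately, without a reboot.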

Command failed when running ghost start

Fresh install, trying to run ghost start I get the following error:
Debug Information:
OS: Raspbian, v8.0
Node Version: v6.13.0
Ghost-CLI Version: 1.5.2
Environment: production
Command: 'ghost start'
An error occurred.
Message: 'Command failed: /bin/sh -c systemctl is-active ghost_blog-dev
unknown
'
Stack: Error: Command failed: /bin/sh -c systemctl is-active ghost_blog-dev
unknown
at makeError (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:169:9)
at module.exports.sync (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:338:15)
at handleShell (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:116:9)
at Function.module.exports.shellSync (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:361:43)
at SystemdProcessManager.isRunning (/usr/lib/node_modules/ghost-cli/extensions/systemd/systemd.js:88:19)
at Instance.running (/usr/lib/node_modules/ghost-cli/lib/instance.js:120:34)
at StartCommand.run (/usr/lib/node_modules/ghost-cli/lib/commands/start.js:28:22)
at precheck.then (/usr/lib/node_modules/ghost-cli/lib/command.js:159:52)
at process._tickCallback (internal/process/next_tick.js:109:7)
at Module.runMain (module.js:613:11)
at run (bootstrap_node.js:387:7)
at startup (bootstrap_node.js:153:9)
at bootstrap_node.js:500:3
Code: 3
If I manually run the command that it says failed, it seems to execute without error, though I am not sure what it does. I assume it has something to do with checking with nginx whether ghost has actually started or not.
Any suggestions would be very helpful! Thank you!
I faced the same problem. You should also add your service file to the /etc directory as a symbolic link, using the following command:
sudo ln -sf /var/www/html/your-blog/system/files/ghost_blog-yourblog.service /etc/systemd/system/ghost_blog-yourblog.service
After adding this, you should execute the following commands:
sudo systemctl stop ghost_blog-yourblog.service
sudo systemctl start ghost_blog-yourblog.service
Then, I hope you'll see the 'active' result for the is-active command.
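For completeness, systemd only picks up a newly symlinked unit after a reload, so a quick check would be (assuming the same ghost_blog-yourblog unit name):
sudo systemctl daemon-reload
systemctl is-active ghost_blog-yourblog.service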

Error: Error trying install composer runtime. Error: Connect Failed

Prog:dist abhishek$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString
Deploying business network from archive: my-network.bna
Business network definition:
Identifier: my-network#0.1.6
Description: My Commodity Trading network
✖ Deploying business network definition. This may take a minute...
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
Command failed
When trying to install the composer runtime, it returns:
Prog:dist abhishek$ composer runtime install -n my-network -p hlfv1 -i PeerAdmin -s randomString
✖ Installing runtime for business network my-network. This may take a minute...
Error: Error trying install composer runtime. Error: Connect Failed
Command failed
I've been working through the Hyperledger Composer tutorial (https://hyperledger.github.io/composer/tutorials/developer-guide.html) on an older Mac, running OS X Mavericks 10.9.5, which means I'm using Docker Toolbox instead of Docker for Mac. I encountered the same error message when deploying the sample Trading network .bna file on my local dev environment Fabric network.
Here is the command in Terminal:
$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString -A admin -S
And here is the error log:
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
In my case, it was because Docker Toolbox answers on an IP address assigned when you start Docker, instead of localhost, 127.0.0.1, etc.
If you are also using Docker Toolbox and are getting the same error, first find the Docker IP number, which should be listed under the Docker whale logo in Terminal when you started it, and then edit the following files (TextEdit should be fine), changing all references to localhost and 127.0.0.1 to that IP number (leave the ports, such as :7050, as they are):
fabric-tools/fabric-scripts/hlfv1/composer/configtx.yaml
fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml
fabric-tools/fabric-scripts/hlfv1/createComposerProfile.sh
fabric-tools/fabric-scripts/hlfv1/createPeerAdminCard.sh
Then, back in Terminal, navigate back to fabric-tools, and if Fabric is already started, stop it, and then recreate the Composer Profile, as documented:
$ ./stopFabric.sh
$ ./createComposerProfile.sh
The log should now show the Docker Toolbox IP for the orderers, CA and peers. Now restart Fabric:
$ ./startFabric.sh
Navigate back to fabric-tools/my-network/dist and re-run the composer deploy command, and if all goes well, it should connect properly.
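As a sketch of that edit (my own shorthand, not from the Composer docs; docker-machine ip default prints the Toolbox VM's IP, and the sed -i '' form is the macOS/BSD variant):
DOCKER_IP=$(docker-machine ip default)
# repeat for each of the four files listed above
sed -i '' "s/localhost/$DOCKER_IP/g; s/127\.0\.0\.1/$DOCKER_IP/g" fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml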
Is your Fabric running? What is the output of docker ps?
Try the following:
Pick a directory of your choice and install Hyperledger Fabric and Hyperledger Composer Playground by running:
curl -sSL https://hyperledger.github.io/composer/install-hlfv1.sh | bash
Then run your command.
Try the code below:
$ composer runtime install -c PeerAdmin#hlfv1 -n basic
$ composer network deploy -a basic.bna -A admin -S adminpw -c PeerAdmin#hlfv1 -f admincard

Unable to automate script: sudo: no tty present and no askpass program specified

I would like to automate a build - for now, during my development, so no security stuff involved.
I have created a script that moves libs to /usr/local/lib and issues ldd command.
These things require sudo.
Running the script from the builder (Qt Creator), I am not prompted to enter my sudo password, and I get the error
sudo: no tty present and no askpass program specified
Sorry, try again.
I have found a few solutions to this, but they just did not work... What am I missing?
Exact code:
in myLib.pro
#temporary to make my life easier
QMAKE_POST_LINK = /home/me/move_libs_script
in move_libs_script:
#!/bin/bash
sudo cp $HOME/myLib/myLib.so.1 /usr/local/lib/
sudo ldconfig
I did as suggested by the answer above: edited the sudoers file via visudo and added the script... even added qmake...
sudo visudo
added at the end:
me ALL=NOPASSWD: /home/me/move_libs_script, /usr/bin/qmake-qt4
It saved the file as /etc/sudoers.tmp (running sudo visudo again showed that my changes were kept, so I am not sure what the .tmp is about; visudo edits a temporary copy and installs it once the syntax check passes).
Still the same errors:
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: no tty present and no askpass program specified
Sorry, try again.
sudo: 3 incorrect password attempts
Edit: after asking the question I found a suggested similar question:
https://stackoverflow.com/a/10668693/1217150
So I tried to add a custom step...
Result:
09:50:03: Running build steps for project myLib...
09:50:03: Could not start process "ssh-askpass Sudo Password | sudo -S bash ./move_libs_script"
Error while building project myLib (target: Desktop)
When executing build step 'Custom Process Step'
(if I run from terminal I get asked for password)
New edit: so I thought I could outsmart the system and call a script that calls my script...
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
Got it!!! I will post it as an answer, I guess.
New edit as answer to comment:
in mainExe.pro:
QMAKE_POST_LINK = ./link_lib
in link_lib:
#!/bin/sh
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
Result: the executable fails because the lib is not found (of course, before testing I removed the copy from /usr/local/lib).
Solution 1:
To run the command I needed with sudo from Qt, and be asked for password
myLib.pro
QMAKE_POST_LINK = /home/me/sudo_move_libs_script
sudo_move_libs_script:
#!/bin/bash
ssh-askpass Sudo Password | sudo -S bash $HOME/move_libs_script
move_libs_script:
#!/bin/bash
cp $HOME/myLib/myLib.so.1 /usr/local/lib/
ldconfig
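For reference, my reading of why the earlier sudoers NOPASSWD entry did not help (an assumption, not verified): it whitelists running the script itself via sudo, but QMAKE_POST_LINK ran the script directly, and only the cp and ldconfig calls inside went through sudo, which are not whitelisted. A variant that would match the rule as written:
# myLib.pro - invoke the whitelisted script itself through sudo
QMAKE_POST_LINK = sudo /home/me/move_libs_script
# move_libs_script then needs no inner sudo and no askpass pipe; note that
# $HOME may resolve to root's home under sudo, hence an explicit /home/me path inside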
Solution 2
Avoid using sudo: run a custom executable from Qt Creator (just like I would when the project is deployed) - I can execute my program with the correct dependency. Unfortunately, it does not work for debugging.
In the Run Configuration, place a script "dummy_executable" instead of the app executable:
dummy_executable:
#!/bin/sh
LD_LIBRARY_PATH=$HOME/myLib
export LD_LIBRARY_PATH
$HOME/myExec/myExec "$@"
This method is great for running the program... unfortunately gdb fails... I think it tries to debug the shell script? So it won't work if I try to step through my program.
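A possible workaround I have not tried: set the variable in Qt Creator's run environment (Projects > Run > Environment) instead of using a wrapper script, so that gdb attaches to the real binary rather than the shell script:
LD_LIBRARY_PATH=/home/me/myLib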
Choosing Solution 1, because it has advantages: allows me to build and debug the program, which is what I really need, and with the dependencies set I get the library built before running the program - which is perfect.
Thank you Etan Reisner

Memory issue with meteor up (mup) on Digital Ocean

I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create and enable the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This swap will remain active on the virtual private server only until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 makes the swap act as an emergency buffer, preventing out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
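To apply the sysctl.conf change immediately, without waiting for a reboot:
sudo sysctl -p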
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
This only worked for me after increasing the swap space to 1 GB:
Turn off all swap:
sudo swapoff -a
Resize the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Make the swap file usable:
sudo mkswap /swapfile
Turn swap back on:
sudo swapon /swapfile
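If mkswap warns about insecure permissions after the file is recreated, re-apply the ownership and mode from the steps above (a precaution, assuming the same /swapfile path):
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile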