Command failed when running ghost start - ghost-blog

Fresh install, trying to run ghost start I get the following error:
Debug Information:
OS: Raspbian, v8.0
Node Version: v6.13.0
Ghost-CLI Version: 1.5.2
Environment: production
Command: 'ghost start'
An error occurred.
Message: 'Command failed: /bin/sh -c systemctl is-active ghost_blog-dev
unknown
'
Stack: Error: Command failed: /bin/sh -c systemctl is-active ghost_blog-dev
unknown
at makeError (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:169:9)
at module.exports.sync (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:338:15)
at handleShell (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:116:9)
at Function.module.exports.shellSync (/usr/lib/node_modules/ghost-cli/node_modules/execa/index.js:361:43)
at SystemdProcessManager.isRunning (/usr/lib/node_modules/ghost-cli/extensions/systemd/systemd.js:88:19)
at Instance.running (/usr/lib/node_modules/ghost-cli/lib/instance.js:120:34)
at StartCommand.run (/usr/lib/node_modules/ghost-cli/lib/commands/start.js:28:22)
at precheck.then (/usr/lib/node_modules/ghost-cli/lib/command.js:159:52)
at process._tickCallback (internal/process/next_tick.js:109:7)
at Module.runMain (module.js:613:11)
at run (bootstrap_node.js:387:7)
at startup (bootstrap_node.js:153:9)
at bootstrap_node.js:500:3
Code: 3
If I manually run the command that it says failed, it seems to execute without error, though I am not sure what it does. I assume it has something to do with checking with nginx whether Ghost has actually started or not.
Any suggestions would be very helpful! Thank you!

I ran into the same problem. You should also add your service file to the /etc directory as a symbolic link, using the following command:
sudo ln -sf /var/www/html/your-blog/system/files/ghost_blog-yourblog.service /etc/systemd/system/ghost_blog-yourblog.service
After adding this, you should execute the following commands:
sudo systemctl stop ghost_blog-yourblog.service
sudo systemctl start ghost_blog-yourblog.service
After that, the is-active command should report 'active'.
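If it still does not, reloading the systemd unit files may help; a minimal sketch, assuming the service name used above:
sudo systemctl daemon-reload
systemctl is-active ghost_blog-yourblog.service
This is the same check that Ghost-CLI runs internally (see the stack trace above), so once it prints 'active', ghost start should no longer fail.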

Related

ShinyProxy Euler App not running - Failed to start container

I am new to Docker and ShinyProxy. I was following the steps from https://www.shinyproxy.io/. All is working fine with the Hello-world and 06_tabset apps. Then I built the image for the Euler app, which is not working when I open the app from the browser.
Error
Status code: 500
Message: Failed to start container
Stack Trace:
eu.openanalytics.containerproxy.ContainerProxyException: Failed to start container
EDIT:
The actual error is:
Caused by: com.spotify.docker.client.exceptions.DockerRequestException: Request error: POST http://localhost:2375/containers/create: 400, body: {"message":"No command specified"}
Also, when I try to just run the app directly, it fails with:
sudo docker run -p 3838:3838 openanalytics/shinyproxy-demo R -e 'shiny::runApp('/root/euler')'
the Error I get is:
shiny::runApp(/root/euler)
Error: unexpected '/' in "shiny::runApp(/"
Execution halted
and after changing it to:
sudo docker run -p 3838:3838 openanalytics/shinyproxy-demo R -e 'shiny::runApp('root/euler')'
I get this:
Error in as.shiny.appobj(appDir) : object 'root' not found
Calls: <Anonymous> -> as.shiny.appobj
Execution halted
I think the problem could be that the Image is openanalytics/shinyproxy-template and not openanalytics/shinyproxy-demo.
try:
sudo docker run -p 3838:3838 openanalytics/shinyproxy-template R -e 'shiny::runApp("/root/euler")'
Just try this:
sudo docker run -p 3838:3838 openanalytics/shinyproxy-demo R -e 'shiny::runApp("/root/euler")'
The problem is that you use single quotes twice in your command: the shell strips the inner pair, so R receives shiny::runApp(/root/euler) with an unquoted path, which is why it complains about the unexpected '/'.
Also make sure that the Shiny application actually exists at the path "/root/euler".
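Equivalently, you can flip the quote nesting so the path stays quoted all the way through to R (same image and path assumptions as the commands above):
sudo docker run -p 3838:3838 openanalytics/shinyproxy-demo R -e "shiny::runApp('/root/euler')"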

Error: Error trying install composer runtime. Error: Connect Failed

Prog:dist abhishek$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString
Deploying business network from archive: my-network.bna
Business network definition:
Identifier: my-network#0.1.6
Description: My Commodity Trading network
✖ Deploying business network definition. This may take a minute...
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
Command failed
When I try to install the composer runtime directly, it returns:
Prog:dist abhishek$ composer runtime install -n my-network -p hlfv1 -i PeerAdmin -s randomString
✖ Installing runtime for business network my-network. This may take a minute...
Error: Error trying install composer runtime. Error: Connect Failed
Command failed
I've been working through the Hyperledger Composer tutorial (https://hyperledger.github.io/composer/tutorials/developer-guide.html) on an older Mac, running OS X Mavericks 10.9.5, which means I'm using Docker Toolbox instead of Docker for Mac. I encountered the same error message when deploying the sample Trading network .bna file on my local dev environment Fabric network.
Here is the command in Terminal:
$ composer network deploy -a my-network.bna -p hlfv1 -i PeerAdmin -s randomString -A admin -S
And here is the error log:
Error: Error trying deploy. Error: Error trying install composer runtime. Error: Connect Failed
In my case, it was because Docker Toolbox answers on an IP address assigned when you start Docker, instead of localhost, 127.0.0.1, etc.
If you are also using Docker Toolbox and are getting the same error, first find the Docker IP address, which should be listed under the Docker whale logo in Terminal when you started it, and then edit the following files (TextEdit should be fine), changing all references to localhost and 127.0.0.1 to that IP address while leaving the ports, such as :7050, in place (a scripted alternative is sketched after the file list):
fabric-tools/fabric-scripts/hlfv1/composer/configtx.yaml
fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml
fabric-tools/fabric-scripts/hlfv1/createComposerProfile.sh
fabric-tools/fabric-scripts/hlfv1/createPeerAdminCard.sh
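For example, the replacements can be scripted instead of edited by hand; a sketch only, assuming the usual Docker Toolbox default IP of 192.168.99.100 (substitute your own) and the BSD sed that ships with macOS:
sed -i '' 's/localhost/192.168.99.100/g; s/127\.0\.0\.1/192.168.99.100/g' fabric-tools/fabric-scripts/hlfv1/composer/docker-compose.yml
Repeat the same command for the other three files listed above.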
Then, back in Terminal, navigate back to fabric-tools, and if Fabric is already started, stop it, and then recreate the Composer Profile, as documented:
$ ./stopFabric.sh
$ ./createComposerProfile.sh
The log should now show the Docker Toolbox IP for the orderers, CA and peers. Now restart Fabric:
$ ./startFabric.sh
Navigate back to fabric-tools/my-network/dist and re-run the composer network deploy command, and if all goes well, it should connect properly.
Is your Fabric running? What is the output of docker ps?
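For reference, a quick way to check (a sketch; the exact container names depend on how Fabric was started):
docker ps --format 'table {{.Names}}\t{{.Status}}'
The hlfv1 peer, orderer, CA and CouchDB containers should all show a status of "Up".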
Try the following:
Pick a directory that you want, and install Hyperledger Fabric and Hyperledger Composer Playground by running:
curl -sSL https://hyperledger.github.io/composer/install-hlfv1.sh | bash
Then run your command.
Try the code below:
$ composer runtime install -c PeerAdmin#hlfv1 -n basic
$ composer network deploy -a basic.bna -A admin -S adminpw -c PeerAdmin#hlfv1 -f admincard

Sbt download fails when running assembly on Ubuntu

I used the following instructions to install sbt:
http://www.scala-sbt.org/0.13/docs/Installing-sbt-on-Linux.html
The core commands are:
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 642AC823
sudo apt-get update
sudo apt-get install sbt
These succeeded.
Then when I try to use sbt assembly for the first time the following occurs:
[SUCCESSFUL ] org.scala-sbt#apply-macro;0.13.11!apply-macro.jar (1991ms)
:: problems summary ::
:::: WARNINGS
[FAILED ] org.scala-sbt#collections;0.13.11!collections.jar: Invalid TLS padding data (784ms)
[FAILED ] org.scala-sbt#collections;0.13.11!collections.jar: Invalid TLS padding data (784ms)
..
:: USE VERBOSE OR DEBUG MESSAGE LEVEL FOR MORE DETAILS
download failed: org.scala-sbt#collections;0.13.11!collections.jar
download failed: org.scala-sbt#incremental-compiler;0.13.11!incremental-compiler.jar
download failed: org.scala-sbt#compile;0.13.11!compile.jar
Error during sbt execution: Error retrieving required libraries
(see /home/stephen/.sbt/boot/update.log for complete log)
Error: Could not retrieve sbt 0.13.11
So, are there any steps missing to make sbt happy? I am on Ubuntu 14.04.1 LTS.
Apparently this was a transient connectivity issue. After waiting some time and retrying, sbt assembly was able to download the missing components and then succeeded.
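If a retry alone does not help, a common fallback (a sketch, not an official step) is to clear the partially downloaded boot artifacts so sbt fetches them again; the path is the one from the error log above:
rm -rf ~/.sbt/boot
sbt assembly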

Memory issue with meteor up (mup) on Digital Ocean

I couldn't find existing posts related to my issue. On a Digital Ocean Droplet, mup setup went fine, but when I try to deploy, I get the following error. Any ideas? Thanks!
root@ts:~/ts-deploy# mup deploy
Meteor Up: Production Quality Meteor Deployments
Building Started: /root/TS/
Bundling Error: code=137, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
bash: line 1: 31217 Killed meteor build --directory /tmp/dc37af3e-eca0-4a19-bf1a-d6d38bb8f517
Below are the logs. node -v indicates I am using 0.10.31. How do I check which script is exiting with the error? Any other ideas? Thanks!
error: Forever detected script exited with code: 1
error: Script restart attempt #106
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #107
Meteor requires Node v0.10.29 or later.
error: Forever detected script exited with code: 1
error: Script restart attempt #108
stepping down to gid: meteoruser
stepping down to uid: meteoruser
After I went back to an old backup of the DO Droplet and re-ran mup setup and mup deploy, I now get this in the command-line output:
Building Started: /root/TS
Bundling Error: code=134, error:
-------------------STDOUT-------------------
Figuring out the best package versions to use. This may take a moment.
-------------------STDERR-------------------
FATAL ERROR: JS Allocation failed - process out of memory
bash: line 1: 1724 Aborted (core dumped) meteor build --directory /tmp/bfdbcb45-9c61-435f-9875-3fb304358996
and this in the logs:
>> stepping down to gid: meteoruser
>> stepping down to uid: meteoruser
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
Exception while invoking method 'login' TypeError: Cannot read property '0' of undefined
at ServiceConfiguration.configurations.remove.service (app/server/accounts.js:7:26)
at Object.Accounts.insertUserDoc (packages/accounts-base/accounts_server.js:1024)
at Object.Accounts.updateOrCreateUserFromExternalService (packages/accounts-base/accounts_server.js:1189)
at Package (packages/accounts-oauth/oauth_server.js:45)
at packages/accounts-base/accounts_server.js:383
at tryLoginMethod (packages/accounts-base/accounts_server.js:186)
at runLoginHandlers (packages/accounts-base/accounts_server.js:380)
at Meteor.methods.login (packages/accounts-base/accounts_server.js:434)
at maybeAuditArgumentChecks (packages/ddp/livedata_server.js:1594)
at packages/ddp/livedata_server.js:648
The memory issue stems from using DigitalOcean's $5 Droplet. To solve the problem, I added swap to the server, as explained in detail below.
Create and enable the swap file using the dd command:
sudo dd if=/dev/zero of=/swapfile bs=1024 count=256k
“of=/swapfile” designates the file’s name. In this case the name is swapfile.
Next, prepare the swap file by creating a Linux swap area:
sudo mkswap /swapfile
The results display:
Setting up swapspace version 1, size = 262140 KiB
no label, UUID=103c4545-5fc5-47f3-a8b3-dfbdb64fd7eb
Finish up by activating the swap file:
sudo swapon /swapfile
You will then be able to see the new swap file when you view the swap summary.
swapon -s
Filename Type Size Used Priority
/swapfile file 262140 0 -1
This file will last on the virtual private server until the machine reboots. You can ensure that the swap is permanent by adding it to the fstab file.
Open up the file:
sudo nano /etc/fstab
Paste in the following line:
/swapfile none swap sw 0 0
Swappiness should be set to 10. Skipping this step may cause poor performance, whereas setting it to 10 makes swap act as an emergency buffer, helping prevent out-of-memory crashes.
You can do this with the following commands:
echo 10 | sudo tee /proc/sys/vm/swappiness
echo vm.swappiness = 10 | sudo tee -a /etc/sysctl.conf
To prevent the file from being world-readable, you should set up the correct permissions on the swap file:
sudo chown root:root /swapfile
sudo chmod 0600 /swapfile
This only worked for me after increasing the swap space to 1 GB:
Turn all swap off:
sudo swapoff -a
Resize the swap file:
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
Make the swap file usable:
sudo mkswap /swapfile
Turn swap back on:
sudo swapon /swapfile
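A quick way to confirm the new size took effect (output layout varies slightly between distributions):
free -m
swapon -s
The Swap line from free -m should now show a total of about 1024 MB.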

Can't complete the mup setup in Meteor-Up

I haven't been able to get past mup setup. I get the following error:
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
Started TaskList: Setup (linux)
[212.1.213.20] - Installing Node.js
[212.1.213.20] ✖ Installing Node.js: FAILED
-----------------------------------STDERR-----------------------------------
Warning: Permanently added '212.1.213.20' (RSA) to the list of known hosts.
stdin: is not a tty
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
sudo: sorry, you must have a tty to run sudo
-----------------------------------STDOUT-----------------------------------
----------------------------------------------------------------------------
Completed TaskList: Setup (linux)
I've found a lot of posts about the stdin: is not a tty error, but none of them make much sense to me.
Open your /etc/sudoers file, find the line that says Defaults requiretty, and change it to Defaults !requiretty.
This will disable the tty requirement globally.
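A safer way to make that change is through visudo, which validates the file before saving and so avoids locking yourself out of sudo:
sudo visudo
# change:  Defaults    requiretty
# to:      Defaults    !requiretty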
