Rails production - all pictures are broken after new deploy - nginx

I followed Ryan's screencast and deployed to a VPS. So I use Unicorn + nginx + GitHub + Ubuntu 12.04 LTS + Capistrano. I also use i18n to translate the application.
I would also like to note that I use CarrierWave for picture uploads. CarrierWave stores the pictures locally on the VPS. Uploading works fine and the uploaded pictures appear.
But every time I deploy new changes to the server, ALL my pictures break. It's really awful. I tried restarting nginx manually:
sudo service nginx restart
And I tried restarting Unicorn:
/etc/init.d/unicorn_Chirch_app restart
That doesn't help either.
When I try to open a broken image URL manually, it says:
The page you were looking for doesn't exist.
You may have mistyped the address or the page may have moved.
When I look for the pictures in the console:
> Photo.all
> => [#<Photo id: 3, description: nil, created_at: "2013-01-21 11:14:01", updated_at: "2013-01-21 11:14:01", image: "1320700703588.jpg">, #<Photo id: 4, description: nil, created_at: "2013-01-21 11:14:01", updated_at: "2013-01-21 11:14:01", image: "Seasonscape_by_alexiuss.jpg">, #<Photo id: 5, description: nil, created_at: "2013-01-21 11:30:30", updated_at: "2013-01-21 11:30:30", image: "Seasonscape_by_alexiuss.jpg">]
As I understand it, they should exist.
Error from the logs:
Started GET "/ru/uploads%2Fphoto%2Fimage%2F4%2FSeasonscape_by_alexiuss" for 89.178.205.47 at 2013-01-21 11:31:17 +0000
ActionController::RoutingError (No route matches [GET] "/ru/uploads%2Fphoto%2Fimage%2F4%2FSeasonscape_by_alexiuss"):
actionpack (3.2.8) lib/action_dispatch/middleware/debug_exceptions.rb:21:in `call'
actionpack (3.2.8) lib/action_dispatch/middleware/show_exceptions.rb:56:in `call'
railties (3.2.8) lib/rails/rack/logger.rb:26:in `call_app'
railties (3.2.8) lib/rails/rack/logger.rb:16:in `call'
actionpack (3.2.8) lib/action_dispatch/middleware/request_id.rb:22:in `call'
rack (1.4.4) lib/rack/methodoverride.rb:21:in `call'
rack (1.4.4) lib/rack/runtime.rb:17:in `call'
activesupport (3.2.8) lib/active_support/cache/strategy/local_cache.rb:72:in `call'
rack (1.4.4) lib/rack/lock.rb:15:in `call'
rack-cache (1.2) lib/rack/cache/context.rb:136:in `forward'
rack-cache (1.2) lib/rack/cache/context.rb:245:in `fetch'
rack-cache (1.2) lib/rack/cache/context.rb:185:in `lookup'
rack-cache (1.2) lib/rack/cache/context.rb:66:in `call!'
rack-cache (1.2) lib/rack/cache/context.rb:51:in `call'
railties (3.2.8) lib/rails/engine.rb:479:in `call'
railties (3.2.8) lib/rails/application.rb:223:in `call'
railties (3.2.8) lib/rails/railtie/configurable.rb:30:in `method_missing'
unicorn (4.5.0) lib/unicorn/http_server.rb:552:in `process_client'
unicorn (4.5.0) lib/unicorn/http_server.rb:628:in `worker_loop'
unicorn (4.5.0) lib/unicorn/http_server.rb:500:in `spawn_missing_workers'
unicorn (4.5.0) lib/unicorn/http_server.rb:511:in `maintain_worker_count'
unicorn (4.5.0) lib/unicorn/http_server.rb:277:in `join'
unicorn (4.5.0) bin/unicorn:121:in `<top (required)>'
/home/deployer/apps/My_app/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `load'
/home/deployer/apps/My_app/shared/bundle/ruby/1.9.1/bin/unicorn:23:in `<main>'
My config/deploy.rb
require "bundler/capistrano"

server "my_ip_here", :web, :app, :db, primary: true

set :application, "My_app"
set :user, "deployer"
set :deploy_to, "/home/#{user}/apps/#{application}"
set :deploy_via, :remote_cache
set :use_sudo, false

set :scm, "git"
set :repository, "git@github.com:MyName/#{application}.git"
set :branch, "master"

default_run_options[:pty] = true
ssh_options[:forward_agent] = true

after "deploy", "deploy:cleanup" # keep only the last 5 releases

namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      run "/etc/init.d/unicorn_#{application} #{command}"
    end
  end

  task :setup_config, roles: :app do
    sudo "ln -nfs #{current_path}/config/nginx.conf /etc/nginx/sites-enabled/#{application}"
    sudo "ln -nfs #{current_path}/config/unicorn_init.sh /etc/init.d/unicorn_#{application}"
    run "mkdir -p #{shared_path}/config"
    put File.read("config/database.example.yml"), "#{shared_path}/config/database.yml"
    puts "Now edit the config files in #{shared_path}."
  end
  after "deploy:setup", "deploy:setup_config"

  task :symlink_config, roles: :app do
    run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
  end
  after "deploy:finalize_update", "deploy:symlink_config"

  desc "Make sure local git is in sync with remote."
  task :check_revision, roles: :web do
    unless `git rev-parse HEAD` == `git rev-parse origin/master`
      puts "WARNING: HEAD is not the same as origin/master"
      puts "Run `git push` to sync changes."
      exit
    end
  end
  before "deploy", "deploy:check_revision"
end

OK, I found the solution. The problem appeared because I didn't change the default folder where images are kept. By default the images go into public/uploads, which sits inside the release directory, so each cap deploy creates a fresh release with an empty public/uploads folder that doesn't contain your older files.
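For context, that default path comes from the store_dir method that the CarrierWave generator puts into the uploader. A sketch of the generated default is below; the uploader class name is assumed, adjust it to your app:

# app/uploaders/image_uploader.rb (class name assumed)
class ImageUploader < CarrierWave::Uploader::Base
  storage :file

  # With the :file storage, paths are relative to public/, so files land in
  # public/uploads/photo/image/<id>/<filename> inside the current release.
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end
end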
To fix this, you should keep the uploads in a folder that doesn't live inside the application release. I chose the easiest way: I created a symlink.
My steps:
1) On your server, go to your app's shared folder (Capistrano generates it automatically) and create a folder there to keep the images:
$ mkdir uploads
2) Give the created folder the necessary permissions:
$ sudo chmod 775 uploads
3) On your local machine, add this line to the symlink_config task in config/deploy.rb:
task :symlink_config, roles: :app do
  ...
  run "ln -nfs #{shared_path}/uploads #{release_path}/public/uploads"
end
4) Then push to git and deploy:
$ git push
$ cap deploy:symlink
$ cap deploy
Now everything works fine.

Good one! I've extended your Capistrano recipe.
# config/recipes/carrierwave.rb
namespace :carrierwave do
  task :uploads_folder do
    run "mkdir -p #{shared_path}/uploads"
    run "#{sudo} chmod 775 #{shared_path}/uploads"
  end
  after 'deploy:setup', 'carrierwave:uploads_folder'

  task :symlink do
    run "ln -nfs #{shared_path}/uploads #{release_path}/public/uploads"
  end
  after 'deploy', 'carrierwave:symlink'
end
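One note: Capistrano 2 only picks this recipe up if it gets loaded. Assuming the config/recipes layout above, a glob like this in your Capfile (or deploy.rb) would do it; the exact location is up to your setup:

# Capfile: load every recipe file, including config/recipes/carrierwave.rb
Dir["config/recipes/*.rb"].each { |recipe| load recipe }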

@ExiRe's and @Charlie's answers work on Capistrano 2.x. In Capistrano 3.x, the run command has been replaced with execute.
So, I solved this with the following steps:
Create a rake file at lib/capistrano/tasks/carrierwave.rake with the following content:
namespace :carrierwave do
  task :uploads_folder do
    on roles(:app) do
      execute "mkdir -p #{shared_path}/uploads"
      execute "#{sudo} chmod 775 #{shared_path}/uploads"
    end
  end

  task :symlink do
    on roles(:app) do
      execute "ln -nfs #{shared_path}/uploads #{release_path}/public/uploads"
    end
  end
end
Add the following line at the end of your Capfile if it doesn't already have it:
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Add these hooks in config/deploy.rb alongside your other tasks:
namespace :deploy do
  ..
  after 'deploy:publishing', 'carrierwave:uploads_folder'
  after 'deploy:publishing', 'carrierwave:symlink'
  ..
end
Push to git and deploy.
Now your uploaded images will remain in the shared/uploads folder even after a new deploy.
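As a side note, recent Capistrano 3 versions can achieve the same thing without a custom symlink task by using the built-in linked_dirs mechanism; a minimal sketch for config/deploy.rb:

# config/deploy.rb: let Capistrano 3 manage the shared uploads symlink
set :linked_dirs, fetch(:linked_dirs, []).push("public/uploads")

Capistrano then creates shared/public/uploads during deploy:check and re-links it into every new release, which has the same effect as the carrierwave:symlink task above.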

Related

Need Ansible playbook to count the number of users currently logged in to VPN

I am writing an Ansible playbook to count the number of users currently logged in to the VPN, using Junos modules as suggested by the network team. I have installed the software below on my RHEL 7 machine, which has Ansible 2.9 installed.
Junos Ansible Requirements
===============================
--> Install dependencies
# pip install ncclient
# pip install junos-eznc
--> Install the Juniper.junos Galaxy role
ansible-galaxy install juniper.junos
--> Have NETCONF enabled on Juniper devices over SSH
# set system services netconf ssh
--> (Optional)
# pip install junos-netconify (python lib for juniper console)
Whenever I run a playbook, I get the error below.
Playbook:
---
- name: Get device uptime
  hosts:
    - dc1
  roles:
    - Juniper.junos
  connection: local
  gather_facts: no
  vars_prompt:
    - name: username
      prompt: Junos Username
      private: no
    - name: password
      prompt: Junos Password
      private: yes
  tasks:
    - name: get uptime using galaxy module
      junos_command:
        commands: show system uptime
      register: uptime
    - name: display uptimes
      debug: var=uptime
Error:
PLAY [Get device uptime] **************************************************************************************************************
TASK [get uptime using galaxy module] *************************************************************************************************
fatal: [172.16.130.1]: FAILED! => {"changed": false, "msg": "invalid rpc for running in check_mode"}
PLAY RECAP ****************************************************************************************************************************
172.16.130.1 : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
I was just exploring Ansible networking commands and got the above error. Please suggest what configuration is required to work with Junos.
Please find below a playbook to check the number of users currently logged in to the VPN:
- name: Get system users currently logged in
  hosts: all
  connection: local
  gather_facts: no
  roles:
    - Juniper.junos
  tasks:
    - name: Retrieve facts from device running Junos OS
      juniper_junos_facts:

    - name: Print version
      debug:
        var: junos.fqdn

    - name: Run RPC Commands
      juniper_junos_command:
        commands: "show security dynamic-vpn users"
        format: text
        dest: "{{ junos.fqdn }}.output"

Unable to deploy meteor 1.9 on Digital Ocean using mup

I have been trying to deploy a Meteor 1.9 application to a DigitalOcean droplet via mup, but I am not able to.
The issue occurs with the sharp installation if I use the abernix/meteord:base image.
If I use another image with a different Node version, I get a bcrypt installation error.
This is my mup file:
module.exports = {
  servers: {
    one: {
      // TODO: set host address, username, and authentication method
      host: "server IP",
      username: "root",
      password: "my password"
      // or neither for authenticate from ssh-agent
    }
  },

  app: {
    // TODO: change app name and path
    name: "appName",
    path: ".",

    servers: {
      one: {}
    },

    buildOptions: {
      serverOnly: true
    },

    env: {
      // TODO: Change to your app's url
      // If you are using ssl, it needs to start with https://
      PORT: 2010,
      ROOT_URL: "my url",
      MONGO_URL: "mongodb://mongodb/meteor",
      MONGO_OPLOG_URL: "mongodb://mongodb/local"
    },

    docker: {
      // change to 'abernix/meteord:base' if your app is using Meteor 1.4 - 1.5
      image: "abernix/meteord:base",
      prepareBundle: false
    },

    // Show progress bar while uploading bundle to server
    // You might need to disable it on CI servers
    enableUploadProgressBar: true
  },

  mongo: {
    version: "3.4.1",
    servers: {
      one: {}
    }
  }
};
This is the error log if I use the latest image, abernix/meteord:node-12.14.0-base:
[192.241.152.237]> core-js@2.6.11 postinstall /bundle/bundle/programs/server/npm/node_modules/@babel/runtime-corejs2/node_modules/core-js
[192.241.152.237]> node -e "try{require('./postinstall')}catch(e){}"
[192.241.152.237]
[192.241.152.237]
[192.241.152.237]> core-js@2.6.11 postinstall /bundle/bundle/programs/server/npm/node_modules/babel-runtime/node_modules/core-js
[192.241.152.237]> node -e "try{require('./postinstall')}catch(e){}"
[192.241.152.237]
[192.241.152.237]
[192.241.152.237]> bcrypt@4.0.1 install /bundle/bundle/programs/server/npm/node_modules/bcrypt
[192.241.152.237]> node-pre-gyp install --fallback-to-build
[192.241.152.237]
[192.241.152.237]node-pre-gyp WARN Using request for node-pre-gyp https download
[192.241.152.237][bcrypt] Success: "/bundle/bundle/programs/server/npm/node_modules/bcrypt/lib/binding/napi-v3/bcrypt_lib.node" is installed via remote
[192.241.152.237]
[192.241.152.237]> sharp@0.24.1 install /bundle/bundle/programs/server/npm/node_modules/sharp
[192.241.152.237]> (node install/libvips && node install/dll-copy && prebuild-install) || (node-gyp rebuild && node install/dll-copy)
[192.241.152.237]
[192.241.152.237]ERR! sharp 'darwin-x64' binaries cannot be used on the 'linux-x64' platform. Please remove the 'node_modules/sharp/vendor' directory and run 'npm install'.
[192.241.152.237]info sharp Attempting to build from source via node-gyp but this may fail due to the above error
[192.241.152.237]info sharp Please see https://sharp.pixelplumbing.com/install for required dependencies
[192.241.152.237]make: Entering directory '/bundle/bundle/programs/server/npm/node_modules/sharp/build'
[192.241.152.237] TOUCH Release/obj.target/libvips-cpp.stamp
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/common.o
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/metadata.o
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/stats.o
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/operations.o
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/pipeline.o
[192.241.152.237] CXX(target) Release/obj.target/sharp/src/sharp.o
Correction:
After re-reading the issue and looking at your error message more closely, it looks like the sharp binaries installed are for macOS, and the build is trying to rebuild them for Linux and potentially failing.
If you haven't already, you might try either destroying the current droplet and reusing mup to set it up, or you could spin up Linux in a VM or a separate droplet, build on an equivalent system, and then deploy from there.

Rails 2.3 app is not being published correctly on unicorn server (stylesheets get HTTP 404)

I am trying to deploy an old Rails app to a Unicorn server on my dev machine.
The problem is that the app does not run correctly because the stylesheets are not served.
I am starting the server via Bundler:
bundle exec unicorn
I, [2016-01-11T19:40:09.403219 #23668] INFO -- : listening on addr=0.0.0.0:8080 fd=5
I, [2016-01-11T19:40:09.403357 #23668] INFO -- : worker=0 spawning...
I, [2016-01-11T19:40:09.404184 #23668] INFO -- : master process ready
I, [2016-01-11T19:40:09.405295 #23681] INFO -- : worker=0 spawned pid=23681
I, [2016-01-11T19:40:09.405631 #23681] INFO -- : Refreshing Gem list
worker=0 ready
127.0.0.1 - - [11/Jan/2016 19:41:33] "GET / HTTP/1.1" 304 - 0.1429
127.0.0.1 - - [11/Jan/2016 19:41:33] "GET /stylesheets/main.css?1311631772 HTTP/1.1" 404 664 0.1346
The server log shows HTTP 404 for main.css, and the app is rendered without CSS styles!
When running on the WEBrick server everything works fine, so it has to be a problem specific to Unicorn.
bundle exec script/server
=> Booting WEBrick
=> Rails 2.3.5 application starting on http://0.0.0.0:3000
=> Call with -d to detach
=> Ctrl-C to shutdown server
[2016-01-11 19:30:50] INFO WEBrick 1.3.1
[2016-01-11 19:30:50] INFO ruby 1.8.7 (2013-12-22) [i686-darwin14.5.0]
[2016-01-11 19:30:50] INFO WEBrick::HTTPServer#start: pid=23474 port=3000
Gemfile:
source 'https://rubygems.org'
ruby '1.8.7'
gem 'rails', '2.3.5'
gem 'warden', '0.10.3'
gem 'devise', '1.0.6'
gem 'delocalize', '~> 0.1.4'
gem 'rdoc'
gem 'mysql'
gem 'unicorn', '4.9.0'
I managed to solve the problem by using this monkey patch:
https://gist.github.com/defunkt/424352
For production I had to uncomment the if statements on lines 38 and 71.
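For anyone who would rather not depend on that gist: since Unicorn only runs config.ru and (unlike script/server) nothing serves public/ for you, wrapping the Rails 2.3 dispatcher with Rack::Static is another way to get the same effect. A minimal sketch, assuming the usual Rails 2.3 config.ru layout and default public/ asset paths:

# config.ru: serve static assets straight from public/ when no nginx sits in front
require ::File.expand_path('../config/environment', __FILE__)

use Rack::Static,
    :urls => ["/stylesheets", "/javascripts", "/images"],
    :root => "public"

run ActionController::Dispatcher.new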

Meteor-Up failures on 2 different servers

I am experiencing a failure every time I try to deploy to either an AWS EC2 instance or a DigitalOcean droplet. With the EC2 instance I am using a pem file, and with the droplet I am using password access. There were a lot of hoops to jump through to get through mup setup, but that is finally successful on both instances. It's mup deploy that fails at the "Invoking deployment process:" step.
Here's my mup.json for the droplet:
{
  // Server authentication info
  "servers": [
    {
      "host": "xx.xx.xx.xx",
      "username": "root",
      "password": "notmypassword"
      // or pem file (ssh based authentication)
      //"pem": "~/.ssh/id_rsa"
      //"pem": "/users/alex/dropbox/awspems/projectmanager.pem"
    }
  ],

  // Install MongoDB in the server, does not destroy local MongoDB on future setup
  "setupMongo": true,

  // WARNING: Node.js is required! Only skip if you already have Node.js installed on server.
  "setupNode": true,

  // WARNING: If nodeVersion omitted will setup 0.10.33 by default. Do not use v, only version number.
  "nodeVersion": "0.10.35",

  // Install PhantomJS in the server
  "setupPhantom": true,

  // Application name (No spaces)
  "appName": "projectmanager",

  // Location of app (local directory)
  "app": "/users/alex/dropbox/projectmanager",

  // Configure environment
  "env": {
    "ROOT_URL": "http://xx.xx.xx.xx"
  },

  // Meteor Up checks if the app comes online just after the deployment
  // before mup checks that, it will wait for no. of seconds configured below
  "deployCheckWaitTime": 30
}
The following messages appear whether I deploy to the EC2 instance or the droplet:
Claire-MacAir-7:projectmanager alex$ mup deploy
Meteor Up: Production Quality Meteor Deployments
------------------------------------------------
“ Checkout Kadira!
It's the best way to monitor performance of your app.
Visit: https://kadira.io/mup ”
Building Started: /users/notmyusername/dropbox/projectmanager
Started TaskList: Deploy app 'projectmanager' (linux)
[xx.xx.xx.xx] - Uploading bundle
[xx.xx.xx.xx] ✔ Uploading bundle: SUCCESS
[xx.xx.xx.xx] - Setting up Environment Variables
[xx.xx.xx.xx] ✔ Setting up Environment Variables: SUCCESS
[xx.xx.xx.xx] - Invoking deployment process
[xx.xx.xx.xx] ✘ Invoking deployment process: FAILED
-----------------------------------STDERR-----------------------------------
ir=/root/.node-gyp/0.10.35',
gyp info spawn args '-Dmodule_root_dir=/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.' ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
gyp info ok
npm WARN package.json meteor-dev-bundle@0.0.0 No description
npm WARN package.json meteor-dev-bundle@0.0.0 No repository field.
npm WARN package.json meteor-dev-bundle@0.0.0 No README data
stop: Unknown instance:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to localhost port 80: Connection refused
App did not pick up! Please check app logs.
-----------------------------------STDOUT-----------------------------------
LE(target) Release/obj.target/bcrypt_lib.node: Finished
COPY Release/bcrypt_lib.node
make: Leaving directory `/opt/projectmanager/tmp/bundle/programs/server/npm/npm-bcrypt/node_modules/bcrypt/build'
> fibers@1.0.1 install /opt/projectmanager/tmp/bundle/programs/server/node_modules/fibers
> node ./build.js
`linux-x64-v8-3.14` exists; testing
Binary is fine; exiting
underscore@1.5.2 node_modules/underscore
semver@4.1.0 node_modules/semver
chalk@0.5.1 node_modules/chalk
├── escape-string-regexp@1.0.2
├── ansi-styles@1.1.0
├── supports-color@0.2.0
├── strip-ansi@0.3.0 (ansi-regex@0.2.1)
└── has-ansi@0.1.0 (ansi-regex@0.2.1)
eachline@2.3.3 node_modules/eachline
└── type-of@2.0.1
source-map-support@0.2.8 node_modules/source-map-support
└── source-map@0.1.32 (amdefine@0.1.0)
fibers@1.0.1 node_modules/fibers
Waiting for MongoDB to initialize. (5 minutes)
connected
projectmanager start/running, process 11786
Waiting for 30 seconds while app is booting up
Checking is app booted or not?
----------------------------------------------------------------------------
Completed TaskList: Deploy app 'projectmanager' (linux)
The project works perfectly with no errors on my localhost.
Any ideas?
Thanks in advance, Alex Adams
And the solution is:
The mup logs showed that it didn't like a path designated for a folder to hold uploaded files. I changed the app to use GridFS, thus storing the files in the database. The app deploys correctly now.

When running a Serverspec test, `initialize': getaddrinfo: Name or service not known (SocketError) error occurs

I am new to the Serverspec testing tool.
When running a test, I got the following error:
[root@ost-svr004 serverspec]# rake spec
/usr/bin/ruby -I/usr/lib/ruby/gems/2.1.0/gems/rspec-support-3.1.2/lib:/usr/lib/ruby/gems/2.1.0/gems/rspec-core-3.1.7/lib /usr/lib/ruby/gems/2.1.0/gems/rspec-core-3.1.7/exe/rspec --pattern spec/www.example.jp/\*_spec.rb
/usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh/transport/session.rb:70:in `initialize': getaddrinfo: Name or service not known (SocketError)
from /usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh/transport/session.rb:70:in `open'
from /usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh/transport/session.rb:70:in `block in initialize'
from /usr/lib/ruby/2.1.0/timeout.rb:76:in `timeout'
from /usr/lib/ruby/2.1.0/timeout.rb:127:in `timeout'
from /usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh/transport/session.rb:67:in `initialize'
from /usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh.rb:202:in `new'
from /usr/lib/ruby/gems/2.1.0/gems/net-ssh-2.9.1/lib/net/ssh.rb:202:in `start'
.....
/usr/bin/ruby -I/usr/lib/ruby/gems/2.1.0/gems/rspec-support-3.1.2/lib:/usr/lib/ruby/gems/2.1.0/gems/rspec-core-3.1.7/lib /usr/lib/ruby/gems/2.1.0/gems/rspec-core-3.1.7/exe/rspec --pattern spec/www.example.jp/\*_spec.rb failed
During the Serverspec installation, I followed the instructions from http://serverspec.org/.
As prerequisites, I also installed the Developer Tools, Ruby, and RubyGems.
In fact, when running the Serverspec test against localhost with the Exec (local) backend, it works and no problem occurs. But when running the test against remote hosts, it fails with this error.
Finally, I got the solution from this link: http://www.firedaemon.com/blog/passwordless-root-ssh-public-key-authentication-on-centos-6. I created an SSH RSA key on the client machine (i.e. the one you are SSH'ing from) and copied it to the remote host.
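For reference, the SSH-backend spec_helper.rb that serverspec-init generates looks roughly like the sketch below (details vary by version). The host it connects to comes from the spec directory name (www.example.jp here, via the Rakefile's TARGET_HOST), so that name must be resolvable through DNS, /etc/hosts, or a Host entry in ~/.ssh/config, otherwise Net::SSH fails with exactly this getaddrinfo error.

# spec/spec_helper.rb: rough sketch of the serverspec-init SSH backend helper
require 'serverspec'
require 'net/ssh'
require 'etc'

set :backend, :ssh

host = ENV['TARGET_HOST']          # set by the generated Rakefile from the spec directory name

options = Net::SSH::Config.for(host)
options[:user] ||= Etc.getlogin

set :host,        options[:host_name] || host
set :ssh_options, options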
