Uploads not working properly NGINX + Passenger + Carrierwave + Carrierwave_backgrounder - nginx

I have a Rails 4.0.0 app set up with a model called Episode which mounts a CarrierWave uploader called FileUploader to upload MP3s. The app uses carrierwave_backgrounder and Resque to background the processing of the uploaded files, which are saved to an SFTP server using the carrierwave-ftp gem. On my local machine it works great. It also works great on my VPS (CentOS 6) when I just start the app with rails s or even rails s -e production. However, when I switch to nginx + Passenger, it no longer works as expected.
The files are uploaded to the /public/uploads/tmp dir where they are supposed to be stored temporarily, but they never get moved into the upload dir that I have specified, and none of the other post-processing gets done: setting the content type, removing cache dirs, setting file size and length, etc.
So, yesterday, I switched from the carrierwave_backgrounder method save_in_background to process_in_background, and now it works fine for files stored locally. However, when I switch to SFTP storage using the carrierwave-ftp gem, the files get processed, i.e., they are transferred to my SFTP server and the path is stored in my model, but then the job hangs in the Resque queue.
The relevant code that is not getting executed is:
process :set_content_type
process :save_content_type_duration_and_size_in_model
Does anyone have any idea why this works fine in development mode and even in production mode, but not under nginx + Passenger?
Here's all the relevant code:
episode.rb:
class Episode < ActiveRecord::Base
  require 'carrierwave/orm/activerecord'
  # require 'mp3info'

  mount_uploader :file, FileUploader
  process_in_background :file

  belongs_to :podcast
  validates :name, :podcast, :file, presence: true

  default_scope { order("created_at DESC") }
  scope :most_recent, ->(max = 5) { limit(max) }
end
file_uploader.rb:
# encoding: utf-8
class FileUploader < CarrierWave::Uploader::Base
  include CarrierWave::MimeTypes
  include ::CarrierWave::Backgrounder::Delay

  storage :sftp

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "#{model.podcast.name.to_s.downcase.parameterize}"
  end

  before :store, :remember_cache_id
  after :store, :delete_tmp_dir

  # This is the relevant code that is not getting executed
  process :set_content_type
  process :save_content_type_duration_and_size_in_model

  def save_content_type_duration_and_size_in_model
    model.content_type = file.content_type if file.content_type
    model.file_size = file.size
    Mp3Info.open(model.file.current_path) do |media|
      model.duration = media.length
    end
  end

  # store! nil's the cache_id after it finishes so we need to remember it for deletion
  def remember_cache_id(new_file)
    @cache_id_was = cache_id
  end

  def delete_tmp_dir(new_file)
    # make sure we don't delete other things accidentally by checking the name pattern
    if @cache_id_was.present? && @cache_id_was =~ /\A[\d]{8}\-[\d]{4}\-[\d]+\-[\d]{4}\z/
      FileUtils.rm_rf(File.join(root, cache_dir, @cache_id_was))
    end
  end
end
config/initializers/carrierwave_backgrounder.rb:
CarrierWave::Backgrounder.configure do |c|
  c.backend :resque, queue: :carrierwave
end
config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.sftp_host = "ftphost.com"
  config.sftp_user = "ftp_user"
  config.sftp_folder = "ftp_password"
  config.sftp_url = "http://url.com"
  config.sftp_options = {
    :password => "ftp_password",
    :port => 22
  }
end
I'm starting Resque with the command: QUEUE=* bundle exec rake environment resque:work &
If you need more info, just ask. Any help would be greatly appreciated.
UPDATE: Well, oddly enough as is often the case, it is now magically working. Not sure what did the trick, so I'm afraid this won't be of any help to anyone else who stumbles on this page.

I have the same issue. My process blocks run in development (rails s) but not under apache2/Passenger. It's not pretty, but the way I solved it was to move my process code into the after :cache callback. The process blocks are called between the :cache callbacks (before and after), so this seemed reasonable to me.
Here's the super weird part: I don't mean call the functions from the callback; I mean copy the code out of your process blocks (or functions) and paste it directly into your after :cache callback.
I know I'm doing something wrong to cause this situation, but I cannot figure out what. Hope this helps you.
version :office_preview do
  # comment out the following since it does nothing under Passenger
  # process :office_to_img
end

def office_to_img
  # this won't be called under Passenger :(
end

after :cache, :after_cache

def after_cache(file)
  # for some reason, calling it here doesn't do anything
  # office_to_img
  # code copied & pasted here from office_to_img
end

Related

Issue in executing a batch file using PeopleCode in Application engine program

I want to execute a batch file using PeopleCode in an Application Engine program, but the call has an issue: Exec returns a non-zero exit code (Value - 1).
Below is the PeopleCode snippet:
Global File &FileLog;
Global string &LogFileName, &Servername, &commandline;
Local string &Footer;

If &Servername = "PSNT" Then
   &ScriptName = "D: && D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";
End-If;

&commandline = &ScriptName;

/* Need to commit work or Exec will fail */
CommitWork();

&ExitCode = Exec("cmd.exe /c " | &commandline, %Exec_Synchronous + %FilePath_Absolute);
If &ExitCode <> 0 Then
   MessageBox(0, "", 0, 0, ("Batch File Call Failed! Exit code returned by script was " | &ExitCode));
End-If;
Any help on how to resolve this issue would be appreciated.
Best bet is to do a trace of the execution.
Thoughts:
Can you log on to the process scheduler you are running this on and execute the script OK?
Is the AE being scheduled or called at run-time?
You should not need to change directory as you are using a fully qualified path to the script.
You should not need to call "cmd /c", as this will create an additional shell for your application to run within, making debugging harder, etc.
Run a trace, and drop us the output. :) HTH
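To illustrate those last two points, a direct call without the extra "cmd /c" shell and without the drive change might look roughly like this (a sketch reusing the path and flags from the question, untested):
/* Sketch: invoke the script by its fully qualified path, synchronously,
   without spawning an extra cmd.exe shell */
&ScriptName = "D:\psoft\PT854\appserv\prcs\RNBatchFile.bat";

/* Need to commit work or Exec will fail */
CommitWork();

&ExitCode = Exec(&ScriptName, %Exec_Synchronous + %FilePath_Absolute);
If &ExitCode <> 0 Then
   MessageBox(0, "", 0, 0, ("Batch file call failed! Exit code was " | &ExitCode));
End-If;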
What about changing the working directory to D: inside the script instead? You are invoking two commands, and I'm wondering what the shell is returning to Exec. I'm assuming you wrote your script to give the appropriate return code and that isn't the problem.
I couldn't tell from the question text, but are you looking for a negative result, such as -1? I think return codes are usually positive: 0 for success, some other positive number for failure. Negative numbers may be acceptable, but I am wondering if Exec doesn't like negative numbers.
Perhaps the PeopleCode ChDir function still works as an alternative to two commands in one line? I haven't tried it for a LONG time.
Another alternative that gives you significant control over the process is to use java.lang.Runtime.exec from PeopleCode: http://jjmpsj.blogspot.com/2010/02/exec-processes-while-controlling-stdin.html.

Debugging bitbake pkg_postinst_${PN}: Append to config-file installed by other recipe

I'm writing an openembedded/bitbake recipe for openembedded-classic. My recipe RDEPENDS on keyutils, and everything seems to work, except one thing:
I want to append a single line to the /etc/request-key.conf file installed by the keyutils package. So I added the following to my recipe:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ..' >> ${sysconfdir}/request-key.conf
}
However, the intended added line is missing in my resulting image.
My recipe inherits update-rc.d if that makes any difference.
My main question is: How do I debug this? Currently I am constructing an entire rootfs image and then poking around in it to see if the changes show up. Surely there is a better way?
UPDATE:
Changed recipe to:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
}
But still no luck.
As far as I know, postinst runs at rootfs creation time, and only runs at first boot if it fails at rootfs time.
So there is an easy way to execute something only at first boot: just check for $D, like this:
pkg_postinst_stuff() {
    #!/bin/sh -e
    if [ x"$D" = "x" ]; then
        # do something at first boot here
    else
        exit 1
    fi
}
postinst scripts are run at rootfs time, so ${sysconfdir} is /etc on your host. Use $D${sysconfdir} to write to the file inside the rootfs being generated.
OE-Classic is pretty ancient so you really should update to oe-core.
That said, do postinsts run at first boot? I'm not sure. Also look in the recipe's work directory under the temp directory and read the log and run files to see if there are any clues there.
One more thing: if foo RDEPENDS on bar, that just means "when foo is installed, bar is also installed". I'm not sure it makes assertions about what is installed during the install phase, when your postinst is running.
If using $D doesn't fix the problem, try editing your postinst to copy the existing file you're trying to edit somewhere else, so you can verify that it exists in the first place. It's possible that you're appending to a file that doesn't exist yet, and the package that installs the file then replaces it.
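Putting those suggestions together, a version of the postinst that handles both cases might look roughly like this (a sketch combining the snippets above; the echoed line is still the placeholder from the question):
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # rootfs generation time: $D is set, so edit the copy inside the image
        echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
    else
        # first boot on the target: $D is empty, edit the real /etc
        echo 'create ... more stuff ...' >> ${sysconfdir}/request-key.conf
    fi
}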

premake5 add generated files to vstudio project

I have overridden the onProject function for the vs2012 action so that it generates some cpp files and then tries to include them in the project:
--can't override the generateProject function directly,
--so have to override at the action level
premake.override( premake.action._list.vs2012, 'onProject', function(base, prj)
    if premake.project.iscpp(prj) then
        --generate files
        --print( "Generating extra files ...")
        local extraFiles = mine.getExtraFiles(prj)
        for _, file in ipairs( extraFiles ) do
            p.generate( file, nil, mine.generateExtraFile )
            mine.addFileToSources(file)
        end
    end
    --Generate regular stuff
    base(prj)
end)

function mine.getExtraFiles(prj)
    local extraFiles = {}
    --works out what files to generate and adds the relevant info to the table
    return extraFiles
end

--this function is passed as a callback to premake.generate
function mine.generateExtraFile(extraFile)
    --write contents of the file
end
This is the function that attempts to add each generated file to the project
function mine.addFileToSources(extraFile)
    local prj = extraFile.prj
    local cfg = extraFile.cfg
    local groups = premake.vstudio.vc2010.categorizeSources(prj)
    local compiledFiles = groups.ClCompile or {}

    --create a new file config for the generated file
    local filename = path.join(extraFile.location, extraFile.filename)
    local fcfg = premake.fileconfig.new( filename, prj )
    premake.fileconfig.addconfig(fcfg, cfg)

    --add the config to the project's sources
    table.insert(compiledFiles, fcfg)
    compiledFiles[filename] = fcfg

    --add to the project's source tree
    --this bit is copied from premake.project.getsourcetree
    -- The tree represents the logical source code tree to be displayed
    -- in the IDE, not the physical organization of the file system. So
    -- virtual paths are used when adding nodes.

    -- If the project script specifies a virtual path for a file, disable
    -- the logic that could trim out empty root nodes from that path. If
    -- the script writer wants an empty root node they should get it.
    local flags
    if fcfg.vpath ~= fcfg.relpath then
        flags = { trim = false }
    end

    -- Virtual paths can overlap, potentially putting files with the same
    -- name in the same folder, even though they have different paths on
    -- the underlying filesystem. The tree.add() call won't overwrite
    -- existing nodes, so provide the extra logic here. Start by getting
    -- the parent folder node, creating it if necessary.
    local tr = premake.project.getsourcetree(prj)
    local parent = premake.tree.add(tr, path.getdirectory(fcfg.vpath), flags)
    local node = premake.tree.insert(parent, premake.tree.new(path.getname(fcfg.vpath)))

    -- Pass through value fetches to the file configuration
    setmetatable(node, { __index = fcfg })
end
For the most part, this all works:
The files are generated correctly and to the correct location.
The files are also included in the vcxproj file correctly.
My problem is that the vcxproj.filters file is not being generated.
When I run premake I get this error:
Generating myproject.vcxproj.filters...Error: [string "src/actions/vstudio/vs2010_vcxproj_filters...."]:82: attempt to index field 'parent' (a nil value)
which corresponds to the function premake.vstudio.vc2010.filterGroup(prj, groups, group)
I get that the new fcfg I created needs to have a parent but I can't work out where or what I should be adding it to.
Can anyone help?
EDIT 1
I've got things working by adding this line to the end of mine.addFileToSources(extraFile):
fcfg.parent = parent
This gives the file config a parent node, so everything works out, but I feel kinda dirty doing this, so I'll look at following Citron's advice.
EDIT 2
Overriding bakeFiles was much cleaner and neater. It wasn't as straightforward as Citron's code, since I needed the information from the baked files in order to carry out my file generation, but I am now confident that my code is correct and will possibly work with exporters other than Visual Studio too.
Here's my new code:
premake.override( premake.oven, 'bakeFiles', function(base, prj)
    --bake the files as normal
    local bakedFiles = base(prj)

    if premake.project.iscpp(prj) then
        --gather information about what files to generate and how
        local extraFiles = mine.getExtraFiles(prj, bakedFiles)
        for _, file in ipairs( extraFiles ) do
            --do the generation
            premake.generate( file, file.extension, mine.generateExtraFile )

            --add the new files
            local filename = premake.filename(file, file.extension)
            table.insert(file.cfg.files, filename)

            -- This should be the first time I've seen this file, so start a new
            -- file configuration for it. Track both by key for quick lookups
            -- and indexed for ordered iteration.
            assert( bakedFiles[filename] == nil )
            local fcfg = premake.fileconfig.new(filename, file.prj)
            bakedFiles[filename] = fcfg
            table.insert(bakedFiles, fcfg)

            premake.fileconfig.addconfig( bakedFiles[filename], file.cfg )
        end

        --sort the baked files again, since we have added to them
        table.sort(bakedFiles, function(a, b)
            return a.vpath < b.vpath
        end)
    end

    return bakedFiles
end)
I don't know what the problem is with your code (a bit too much to read, and not enough time :p) but if you just want to add some generated files to your project tree, I would advise you to override premake.oven.bakeFiles instead.
This is what I used to add files generated by Qt in my addon. See premake.extensions.qt.customBakeFiles on https://github.com/dcourtois/premake-qt/blob/master/qt.lua
Basically in the bakeFiles override, you can just browse your projects, and insert files in the list easily. Then, if those added files need some custom configuration, you can then override premake.fileconfig.addconfig. See premake.extensions.qt.customAddFileConfig in the aforementioned addon.
In this addconfig override, you'll have access to the files and you will be able to modify their configuration object: you can add custom build rules, special options, etc.
It's not a direct answer to your specific question, but I hope it will help you achieve what you need.

Has there ever been a unix system call to create a link from an open file descriptor? [duplicate]

In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions I expected to find a version of link() called flink() (compare with chmod()/fchmod()) but, at least on Linux this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/fd with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
char path[PATH_MAX];
fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);
/* File I/O on 'fd'... */
snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
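For what it's worth, that write-then-rename pattern is only a few lines of C; here is a minimal sketch (filenames taken from the steps above, error handling kept short), relying on rename(2) being atomic when source and target are on the same filesystem:
#include <stdio.h>

int main(void)
{
    /* 1. Open a.dat.part for write. */
    FILE *out = fopen("a.dat.part", "w");
    if (!out) { perror("fopen"); return 1; }

    /* 2. Write your data. */
    fputs("your data here\n", out);
    if (fclose(out) != 0) { perror("fclose"); return 1; }

    /* 3. Rename a.dat.part to a.dat (atomically replaces any existing a.dat). */
    if (rename("a.dat.part", "a.dat") != 0) { perror("rename"); return 1; }
    return 0;
}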
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to #mark4o posting about linkat(2), see his answer for details.
I wanted to give it a try to see what actually happens when trying to link an anonymous file back into the filesystem it is stored on (often /tmp, e.g. for video data that Firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) do the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)

if ($#ARGV != 1) {
    print "wrong number of args. Usage:\n";
    print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
    print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
    exit(1);
}

# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW);  # nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;

# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100  /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW   0x400  /* Follow symbolic links. */
# #define AT_EMPTY_PATH       0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW )  { sub AT_SYMLINK_FOLLOW () { 0x0400 } }
unless (defined &AT_EMPTY_PATH )      { sub AT_EMPTY_PATH     () { 0x1000 } }

sub my_linkat ($$$$$) {
    # tmp copies: perl doesn't know that the string args won't be modified.
    my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
    return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}

sub linkat_dotpaths ($$$) {
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
    close DOTFD;
    return $ret;
}

sub link_stdin ($) {
    my ($newp, ) = @_;
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
    close DOTFD;
    return $ret;
}

sub linkat_follow_dotpaths ($$) {
    return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}

## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];
# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";

if ($oldp eq '-') {
    print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
    $ret = link_stdin( $newp );
} else {
    $ret = linkat_follow_dotpaths($oldp, $newp);
}

# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;

# if you want to see exactly what happened, run
#   strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable as an unprivileged user. It's similar to the debugfs comment above.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over current practice of temporary file renaming (rename, note, is guaranteed atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.

Automake: how to handle global and local 'make check' effectively?

In a larger project, I have set up ./tests/Makefile.am to run a number of tests when I call make check. The file global_wrapper.c contains the setup / breakdown code, and it calls test functions implemented in several subdirectories.
TESTS = global_test
check_PROGRAMS = global_test
global_test_SOURCES = global_wrapper.c foo/foo_test.c bar/bar_test.c
Works great. But the tests take a long time, so I would like to be able to optionally execute only tests from a single subdir. This is how I did it at first.
I added the subdirectories:
SUBDIRS = foo bar
In the subdirectories, I added local wrappers and Makefile.am's:
TESTS = foo_test
check_PROGRAMS = foo_test
# the foo_test.c here is of course the same as in the global Makefile.am
foo_test_SOURCES = foo_wrapper.c foo_test.c
This, too, works great - when I call make check in the subdirectory foo, only the foo tests are executed.
However, when I now call make check in ./tests, all tests are executed twice. Once through global_test, and once through the local test programs.
If I omit the SUBDIRS statement in the global Makefile.am, the subdirectory makefiles don't get built. If I omit TESTS from the local Makefile.am's, make check doesn't do anything for the local directories.
I'm not that familiar with automake, but I am pretty sure there is some way to solve this dilemma. Can anybody here give me a hint?
Break your tests up. In your tests/Makefile.am do:
TESTS = foo_test bar_test
and build foo_test and bar_test appropriately with something like:
foo_test_SOURCES = foo/foo_wrapper.c foo/foo_test.c
bar_test_SOURCES = bar/bar_wrapper.c bar/bar_test.c
Now, if you do a raw 'make check', both tests will be run. If you only want to run one test, you can do that with 'make check TESTS=foo_test' or 'make check TESTS=bar_test' and only the appropriate test will run. Typically, the Makefile.am lists all the tests that will be run by default in TESTS and the user selects alternate tests at make-time. Naturally, if you are running the tests a lot, you can 'export TESTS=foo_test' in your shell session and then only type 'make check'.
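So, with that layout in place, the day-to-day invocations end up looking something like this (just restating the commands from the previous paragraph):
# run every test listed in TESTS
make check

# run only the foo tests
make check TESTS=foo_test

# pin the selection for the rest of the shell session
export TESTS=foo_test
make check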
Can't you remove from "global_test" any test that is already executed in a subdirectory? (Just so they simply don't get executed twice.)
I think you could maybe override the check rule at the top level to define an environment variable:
check:
	DISABLE_SUBTESTS=1 make check-recursive
and then test DISABLE_SUBTESTS in your sub-directories to decide whether to actually run the tests or not.
(Personally, I'd rather arrange to work in the existing make check framework by concealing the output of my tests, rather than overwriting the produced rules like this.)
