premake5 add generated files to vstudio project - premake

I have overridden the onProject function for the vs2012 action, which generates some cpp files and then tries to include them in the project:
-- can't override generateProject directly,
-- so we have to override at the action level
premake.override(premake.action._list.vs2012, 'onProject', function(base, prj)
    if premake.project.iscpp(prj) then
        -- generate the extra files
        --print("Generating extra files ...")
        local extraFiles = mine.getExtraFiles(prj)
        for _, file in ipairs(extraFiles) do
            premake.generate(file, nil, mine.generateExtraFile)
            mine.addFileToSources(file)
        end
    end
    -- generate the regular stuff
    base(prj)
end)
function mine.getExtraFiles(prj)
    local extraFiles = {}
    -- works out which files to generate and adds the relevant info to the table
    return extraFiles
end

-- this function is passed as a callback to premake.generate
function mine.generateExtraFile(extraFile)
    -- write the contents of the file
end
This is the function that attempts to add each generated file to the project:
function mine.addFileToSources(extraFile)
    local prj = extraFile.prj
    local cfg = extraFile.cfg
    local groups = premake.vstudio.vc2010.categorizeSources(prj)
    local compiledFiles = groups.ClCompile or {}

    -- create a new file config for the generated file
    local filename = path.join(extraFile.location, extraFile.filename)
    local fcfg = premake.fileconfig.new(filename, prj)
    premake.fileconfig.addconfig(fcfg, cfg)

    -- add the config to the project's sources
    table.insert(compiledFiles, fcfg)
    compiledFiles[filename] = fcfg

    -- add it to the project's source tree
    -- this bit is copied from premake.project.getsourcetree

    -- The tree represents the logical source code tree to be displayed
    -- in the IDE, not the physical organization of the file system. So
    -- virtual paths are used when adding nodes.

    -- If the project script specifies a virtual path for a file, disable
    -- the logic that could trim out empty root nodes from that path. If
    -- the script writer wants an empty root node they should get it.
    local flags
    if fcfg.vpath ~= fcfg.relpath then
        flags = { trim = false }
    end

    -- Virtual paths can overlap, potentially putting files with the same
    -- name in the same folder, even though they have different paths on
    -- the underlying filesystem. The tree.add() call won't overwrite
    -- existing nodes, so provide the extra logic here. Start by getting
    -- the parent folder node, creating it if necessary.
    local tr = premake.project.getsourcetree(prj)
    local parent = premake.tree.add(tr, path.getdirectory(fcfg.vpath), flags)
    local node = premake.tree.insert(parent, premake.tree.new(path.getname(fcfg.vpath)))

    -- Pass through value fetches to the file configuration
    setmetatable(node, { __index = fcfg })
end
For the most part, this all works:
The files are generated correctly and to the correct location.
The files are also included in the vcxproj file correctly.
My problem is that the vcxproj.filters file is not being generated.
When I run premake I get this error:
Generating myproject.vcxproj.filters...Error: [string "src/actions/vstudio/vs2010_vcxproj_filters...."]:82: attempt to index field 'parent' (a nil value)
which corresponds to the function premake.vstudio.vc2010.filterGroup(prj, groups, group).
I get that the new fcfg I created needs to have a parent, but I can't work out where or what I should be adding it to.
Can anyone help?
EDIT 1
I've got things working by adding this line to the end of mine.addFileToSources(extraFile):
fcfg.parent = parent
This gives the file config a parent node so everything works out, but I feel kinda dirty doing this, so I'll look at following Citron's advice.
EDIT 2
Overriding bakeFiles was much cleaner and neater. It wasn't as straightforward as Citron's code, since I needed the information from the baked files in order to carry out my file generation, but I am now confident that my code is correct and will possibly work with exporters other than vstudio too.
Here's my new code:
premake.override(premake.oven, 'bakeFiles', function(base, prj)
    -- bake the files as normal
    local bakedFiles = base(prj)
    if premake.project.iscpp(prj) then
        -- gather information about which files to generate and how
        local extraFiles = mine.getExtraFiles(prj, bakedFiles)
        for _, file in ipairs(extraFiles) do
            -- do the generation
            premake.generate(file, file.extension, mine.generateExtraFile)
            -- add the new file
            local filename = premake.filename(file, file.extension)
            table.insert(file.cfg.files, filename)
            -- This should be the first time we've seen this file; start a new
            -- file configuration for it. Track both by key for quick lookups
            -- and indexed for ordered iteration.
            assert(bakedFiles[filename] == nil)
            local fcfg = premake.fileconfig.new(filename, file.prj)
            bakedFiles[filename] = fcfg
            table.insert(bakedFiles, fcfg)
            premake.fileconfig.addconfig(bakedFiles[filename], file.cfg)
        end
        -- re-sort the baked files, since we have added to them
        table.sort(bakedFiles, function(a, b)
            return a.vpath < b.vpath
        end)
    end
    return bakedFiles
end)

I don't know what the problem is with your code (a bit too much to read, and not enough time :p) but if you just want to add some generated files to your project tree, I would advise you to override premake.oven.bakeFiles instead.
This is what I used to add files generated by Qt in my addon. See premake.extensions.qt.customBakeFiles on https://github.com/dcourtois/premake-qt/blob/master/qt.lua
Basically, in the bakeFiles override you can just browse your projects and insert files in the list easily. Then, if those added files need some custom configuration, you can override premake.fileconfig.addconfig. See premake.extensions.qt.customAddFileConfig in the aforementioned addon.
In this addconfig override you'll have access to the files, and you will be able to modify their configuration object: you can add custom build rules, special options, etc.
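For illustration, a minimal, untested sketch of what such an addconfig override could look like; mine.isGeneratedFile is a hypothetical helper you would write yourself, not part of the premake API:
-- Sketch: wrap premake.fileconfig.addconfig to customize generated files.
-- 'mine.isGeneratedFile' is a hypothetical predicate supplied by your own code.
premake.override(premake.fileconfig, 'addconfig', function(base, fcfg, cfg)
    -- let premake build the file configuration as usual
    base(fcfg, cfg)
    -- then tweak the settings of our own generated files
    if mine.isGeneratedFile(fcfg.abspath) then
        -- e.g. attach custom build rules or extra compile options here
    end
end)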
It's not a direct answer to your specific question, but I hope it will help you achieve what you need.

Related

Change output directory in "sbt-native-packager"

Sorry, I'm new to sbt and the "sbt-native-packager". What I need to do is to map whole directories to the .zip file and change the output path.
This is how I've done my mapping of the directory:
mappings in Universal <++= (packageBin in Compile, baseDirectory) map { (_, baseDirectory) =>
  val dir = baseDirectory / "migrations"
  (dir.***) pair relativeTo(dir.getParentFile)
}
The mapping works perfectly fine, but I need to have a specific folder structure in the resulting .zip file.
In this example this directory is mapped to ".../target/stage/universal/migrations" but I need it to be mapped into a folder "db" like this: ".../target/stage/db/universal/migrations"
Many thanks in advance!
For mapping complete directories there are some MappingHelpers you can use. Your code can be simplified to
mappings in Universal ++= directory(baseDirectory.value / "migrations")
Regarding your second question, how to change the output folder: the question is not quite correct as posed; it should be "how to change the destination path of a mapping". The universal packaging is a bit special, as the target output looks like the resulting package.
Native packager uses mappings (a sequence of File -> String tuples) that define a file and its corresponding output path in the resulting package. So if you want to change
# current
./target/stage/universal/migrations
# expected
./target/stage/db/universal/migrations
I assume you want the migrations in your zip file in a db folder like this
/ # zip root
bin/ # start scripts
db/ # migrations go here
conf/ # configuration files
lib/ # jars
In order to accomplish this you have to change the destination string. It would look something like this (not tested):
mappings in Universal ++= contentOf(baseDirectory.value / "migrations").map {
  case (file, dest) => file -> s"db/$dest"
}
cheers,
Muki

Uploads not working properly NGINX + Passenger + Carrierwave + Carrierwave_backgrounder

I have a Rails 4.0.0 app with a model called episode which mounts a carrierwave uploader called file_uploader to upload mp3s. I set the app up using carrierwave_backgrounder and resque to background the processing of the uploaded files, which are saved to an SFTP server using the carrierwave-ftp gem. On my local machine it works great. It also works great on my VPS (CentOS 6) when I just start the app with rails s or even rails s -e production. However, when I switch to nginx + passenger, it no longer works as expected.
The files are uploaded to the /public/uploads/tmp dir, where they are supposed to be stored temporarily, but they never get moved into the upload dir that I have specified, and none of the other post-processing gets done: setting the content type, removing cache dirs, setting file size and length, etc.
So yesterday I switched from the carrierwave_backgrounder command save_in_background to process_in_background, and now it works fine for files stored locally. However, when I switch to SFTP storage using the carrierwave-ftp gem, the files get processed, i.e., they are transferred to my SFTP server and the path is stored in my model, but then the job hangs in the Resque queue.
The relevant code that is not getting executed is:
process :set_content_type
process :save_content_type_duration_and_size_in_model
Does anyone have any idea why this would work fine using development mode and even production mode but not using nginx + passenger?
Here's all the relevant code below:
episode.rb:
class Episode < ActiveRecord::Base
  require 'carrierwave/orm/activerecord'
  # require 'mp3info'

  mount_uploader :file, FileUploader
  process_in_background :file

  belongs_to :podcast
  validates :name, :podcast, :file, presence: true

  default_scope { order("created_at DESC") }
  scope :most_recent, ->(max = 5) { limit(max) }
end
file_uploader.rb:
# encoding: utf-8
class FileUploader < CarrierWave::Uploader::Base
  include CarrierWave::MimeTypes
  include ::CarrierWave::Backgrounder::Delay

  storage :sftp

  # Override the directory where uploaded files will be stored.
  # This is a sensible default for uploaders that are meant to be mounted:
  def store_dir
    "#{model.podcast.name.to_s.downcase.parameterize}"
  end

  before :store, :remember_cache_id
  after :store, :delete_tmp_dir

  # This is the relevant code that is not getting executed
  process :set_content_type
  process :save_content_type_duration_and_size_in_model

  def save_content_type_duration_and_size_in_model
    model.content_type = file.content_type if file.content_type
    model.file_size = file.size
    Mp3Info.open(model.file.current_path) do |media|
      model.duration = media.length
    end
  end

  # store! nils the cache_id after it finishes, so we need to remember it for deletion
  def remember_cache_id(new_file)
    @cache_id_was = cache_id
  end

  def delete_tmp_dir(new_file)
    # make sure we don't delete other things accidentally by checking the name pattern
    if @cache_id_was.present? && @cache_id_was =~ /\A[\d]{8}\-[\d]{4}\-[\d]+\-[\d]{4}\z/
      FileUtils.rm_rf(File.join(root, cache_dir, @cache_id_was))
    end
  end
end
config/initializers/carrierwave_backgrounder.rb:
CarrierWave::Backgrounder.configure do |c|
  c.backend :resque, queue: :carrierwave
end
config/initializers/carrierwave.rb:
CarrierWave.configure do |config|
  config.sftp_host = "ftphost.com"
  config.sftp_user = "ftp_user"
  config.sftp_folder = "ftp_password"
  config.sftp_url = "http://url.com"
  config.sftp_options = {
    :password => "ftp_password",
    :port => 22
  }
end
I'm starting Resque with the command: QUEUE=* bundle exec rake environment resque:work &
If you need more info, just ask. Any help would be greatly appreciated.
UPDATE: Well, oddly enough as is often the case, it is now magically working. Not sure what did the trick, so I'm afraid this won't be of any help to anyone else who stumbles on this page.
I have the same issue: my process blocks run in development (rails s) but not under apache2/passenger. It's not pretty, but the way I solved it was to move my process code into the after :cache callback. The process blocks are called between the after and before cache callbacks, so this seemed reasonable to me.
Here's the super weird part: I don't mean call the functions from there, I mean copy the code out of your process blocks (or the functions they call) and paste it directly into your after_cache callback.
I know I'm doing something wrong to cause this situation, but I cannot figure it out. Hope this helps you.
version :office_preview do
  # comment out the following since it does nothing under Passenger
  # process :office_to_img
end

def office_to_img
  # this won't be called under passenger :(
end

after :cache, :after_cache

def after_cache(file)
  # for some reason, calling office_to_img here doesn't do anything either,
  # so the code is copied & pasted here from office_to_img
end

Has there ever been a unix system call to create a link from an open file descriptor? [duplicate]

In Unix, it's possible to create a handle to an anonymous file by, e.g., creating and opening it with creat() and then removing the directory link with unlink() - leaving you with a file with an inode and storage but no possible way to re-open it. Such files are often used as temp files (and typically this is what tmpfile() returns to you).
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
When poking through the relevant system call functions I expected to find a version of link() called flink() (compare with chmod()/fchmod()), but, at least on Linux, this doesn't exist.
Bonus points for telling me how to create the anonymous file without briefly exposing a filename in the disk's directory structure.
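For concreteness, a minimal sketch of the trick described above. It uses mkstemp() for the create-and-open step, so the name exists only for the instant between the two calls (illustrative, not hardened code):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    char name[] = "/tmp/anonXXXXXX";
    int fd = mkstemp(name);           /* create and open a unique file */
    if (fd < 0) { perror("mkstemp"); return 1; }
    unlink(name);                     /* drop the only directory link */
    /* the fd still works: the inode lives until the last close() */
    write(fd, "scratch data\n", 13);
    close(fd);                        /* now the storage is reclaimed */
    return 0;
}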
A patch for a proposed Linux flink() system call was submitted several years ago, but when Linus stated "there is no way in HELL we can do this securely without major other incursions", that pretty much ended the debate on whether to add this.
Update: As of Linux 3.11, it is now possible to create a file with no directory entry using open() with the new O_TMPFILE flag, and link it into the filesystem once it is fully formed using linkat() on /proc/self/fd/fd with the AT_SYMLINK_FOLLOW flag.
The following example is provided on the open() manual page:
char path[PATH_MAX];
int fd;

fd = open("/path/to/dir", O_TMPFILE | O_RDWR, S_IRUSR | S_IWUSR);

/* File I/O on 'fd'... */

snprintf(path, PATH_MAX, "/proc/self/fd/%d", fd);
linkat(AT_FDCWD, path, AT_FDCWD, "/path/for/file", AT_SYMLINK_FOLLOW);
Note that linkat() will not allow open files to be re-attached after the last link is removed with unlink().
My question: is there any way to re-attach a file like this back into the directory structure? If you could do this it means that you could e.g. implement file writes so that the file appears atomically and fully formed. This appeals to my compulsive neatness. ;)
If this is your only goal, you can achieve this in a much simpler and more widely used manner. If you are outputting to a.dat:
Open a.dat.part for write.
Write your data.
Rename a.dat.part to a.dat.
I can understand wanting to be neat, but unlinking a file and relinking it just to be "neat" is kind of silly.
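A minimal sketch of those three steps (illustrative only; the fsync() is there so that the rename publishes fully written data):
#include <stdio.h>
#include <unistd.h>

/* Write data so that "a.dat" appears atomically and fully formed. */
int write_atomically(const char *data) {
    FILE *f = fopen("a.dat.part", "w");
    if (f == NULL) return -1;
    fputs(data, f);
    fflush(f);                /* flush stdio buffers to the kernel */
    fsync(fileno(f));         /* flush kernel buffers to disk */
    fclose(f);
    return rename("a.dat.part", "a.dat");  /* atomic replace on POSIX */
}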
This question on serverfault seems to indicate that this kind of re-linking is unsafe and not supported.
Thanks to @mark4o for posting about linkat(2); see his answer for details.
I wanted to give it a try to see what actually happens when you try to link an anonymous file back into the filesystem it is stored on (often /tmp, e.g. for video data that firefox is playing).
As of Linux 3.16, there still appears to be no way to undelete a deleted file that's still held open. Neither AT_SYMLINK_FOLLOW nor AT_EMPTY_PATH for linkat(2) does the trick for deleted files that used to have a name, even as root.
The only alternative is tail -c +1 -f /proc/19044/fd/1 > data.recov, which makes a separate copy, and you have to kill it manually when it's done.
Here's the perl wrapper I cooked up for testing. Use strace -eopen,linkat linkat.pl - </proc/.../fd/123 newname to verify that your system still can't undelete open files. (Same applies even with sudo). Obviously you should read code you find on the Internet before running it, or use a sandboxed account.
#!/usr/bin/perl -w
# 2015 Peter Cordes <peter@cordes.ca>
# public domain. If it breaks, you get to keep both pieces. Share and enjoy
# Linux-only linkat(2) wrapper (opens "." to get a directory FD for relative paths)

if ($#ARGV != 1) {
    print "wrong number of args. Usage:\n";
    print "linkat old new \t# will use AT_SYMLINK_FOLLOW\n";
    print "linkat - <old new\t# to use the AT_EMPTY_PATH flag (requires root, and still doesn't re-link arbitrary files)\n";
    exit(1);
}

# use POSIX qw(linkat AT_EMPTY_PATH AT_SYMLINK_FOLLOW); # nope, not even POSIX linkat is there
require 'syscall.ph';
use Errno;

# /usr/include/linux/fcntl.h
# #define AT_SYMLINK_NOFOLLOW 0x100  /* Do not follow symbolic links. */
# #define AT_SYMLINK_FOLLOW   0x400  /* Follow symbolic links. */
# #define AT_EMPTY_PATH       0x1000 /* Allow empty relative pathname */
unless (defined &AT_SYMLINK_NOFOLLOW) { sub AT_SYMLINK_NOFOLLOW() { 0x0100 } }
unless (defined &AT_SYMLINK_FOLLOW )  { sub AT_SYMLINK_FOLLOW ()  { 0x0400 } }
unless (defined &AT_EMPTY_PATH )      { sub AT_EMPTY_PATH ()      { 0x1000 } }

sub my_linkat ($$$$$) {
    # tmp copies: perl doesn't know that the string args won't be modified.
    my ($oldp, $newp, $flags) = ($_[1], $_[3], $_[4]);
    return !syscall(&SYS_linkat, fileno($_[0]), $oldp, fileno($_[2]), $newp, $flags);
}

sub linkat_dotpaths ($$$) {
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(DOTFD, $_[0], DOTFD, $_[1], $_[2]);
    close DOTFD;
    return $ret;
}

sub link_stdin ($) {
    my ($newp, ) = @_;
    open(DOTFD, ".") or die "open . $!";
    my $ret = my_linkat(0, "", DOTFD, $newp, &AT_EMPTY_PATH);
    close DOTFD;
    return $ret;
}

sub linkat_follow_dotpaths ($$) {
    return linkat_dotpaths($_[0], $_[1], &AT_SYMLINK_FOLLOW);
}

## main
my $oldp = $ARGV[0];
my $newp = $ARGV[1];
# link($oldp, $newp) or die "$!";
# my_linkat(fileno(DIRFD), $oldp, fileno(DIRFD), $newp, AT_SYMLINK_FOLLOW) or die "$!";
if ($oldp eq '-') {
    print "linking stdin to '$newp'. You will get ENOENT without root (or CAP_DAC_READ_SEARCH). Even then doesn't work when links=0\n";
    $ret = link_stdin($newp);
} else {
    $ret = linkat_follow_dotpaths($oldp, $newp);
}
# either way, you still can't re-link deleted files (tested Linux 3.16 and 4.2).
# print STDERR
die "error: linkat: $!.\n" . ($!{ENOENT} ? "ENOENT is the error you get when trying to re-link a deleted file\n" : '') unless $ret;
# if you want to see exactly what happened, run
#   strace -eopen,linkat linkat.pl
Clearly, this is possible -- fsck does it, for example. However, fsck does it with major localized file system mojo and will clearly not be portable, nor executable by an unprivileged user. It's similar to the debugfs comment above.
Writing that flink(2) call would be an interesting exercise. As ijw points out, it would offer some advantages over the current practice of temporary file renaming (rename, note, is guaranteed atomic).
Kind of late to the game but I just found http://computer-forensics.sans.org/blog/2009/01/27/recovering-open-but-unlinked-file-data which may answer the question. I haven't tested it, though, so YMMV. It looks sound.

Does sbt have something like gradle's processResources task with ReplaceTokens support?

We are moving into Scala/SBT from a Java/Gradle stack. Our gradle builds were leveraging a task called processResources and some Ant filter thing named ReplaceTokens to dynamically replace tokens in a checked-in .properties file without actually changing the .properties file (just changing the output). The gradle task looks like:
processResources {
    def whoami = System.getProperty('user.name')
    def hostname = InetAddress.getLocalHost().getHostName()
    def buildTimestamp = new Date().format('yyyy-MM-dd HH:mm:ss z')
    filter ReplaceTokens, tokens: [
        "buildsig.version"    : project.version,
        "buildsig.classifier" : project.classifier,
        "buildsig.timestamp"  : buildTimestamp,
        "buildsig.user"       : whoami,
        "buildsig.system"     : hostname,
        "buildsig.tag"        : buildTag
    ]
}
This task locates all the template files in the src/main/resources directory, performs the requisite substitutions and outputs the results at build/resources/main. In other words it transforms src/main/resources/buildsig.properties from...
buildsig.version=#buildsig.version#
buildsig.classifier=#buildsig.classifier#
buildsig.timestamp=#buildsig.timestamp#
buildsig.user=#buildsig.user#
buildsig.system=#buildsig.system#
buildsig.tag=#buildsig.tag#
...to build/resources/main/buildsig.properties...
buildsig.version=1.6.5
buildsig.classifier=RELEASE
buildsig.timestamp=2013-05-06 09:46:52 PDT
buildsig.user=jenkins
buildsig.system=bobk-mbp.local
buildsig.tag=dev
Which, ultimately, finds its way into the WAR file at WEB-INF/classes/buildsig.properties. This works like a champ to record build specific information in a Properties file which gets loaded from the classpath at runtime.
What do I do in SBT to get something like this done? I'm new to Scala / SBT so please forgive me if this seems a stupid question. At the end of the day what I need is a means of pulling some information from the environment on which I build and placing that information into a properties file that is classpath loadable at runtime. Any insights you can give to help me get this done are greatly appreciated.
The sbt-buildinfo plugin is a good option. The README shows an example of how to define custom mappings and mappings that should run on each compile. In addition to the straightforward addition of normal settings like version shown there, you want a section like this:
buildInfoKeys ++= Seq[BuildInfoKey](
  "hostname" -> java.net.InetAddress.getLocalHost().getHostName(),
  "whoami"   -> System.getProperty("user.name"),
  BuildInfoKey.action("buildTimestamp") {
    java.text.DateFormat.getDateTimeInstance.format(new java.util.Date())
  }
)
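The plugin then generates a Scala object you can read at runtime. A hedged usage sketch, assuming the default keys plus the custom keys above and buildInfoPackage := "buildsig" in your build:
import buildsig.BuildInfo

object BuildSig extends App {
  println(BuildInfo.version)        // standard key supplied by the plugin
  println(BuildInfo.hostname)       // custom key defined above
  println(BuildInfo.buildTimestamp) // recomputed via BuildInfoKey.action
}
Note this produces a generated source file rather than a .properties file; if you specifically need a classpath-loadable properties file, the sbt-editsource approach below is closer to the Gradle behavior.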
Would the following be what you're looking for?
sbt-editsource: An SBT plugin for editing files
sbt-editsource is a text substitution plugin for SBT 0.11.x and greater. In a way, it's a poor man's sed(1) for SBT. It provides the ability to apply line-by-line substitutions to a source text file, producing an edited output file. It supports two kinds of edits: variable substitution, where ${var} is replaced by a value, and sed-like regular expression substitution.
This is from Community Plugins.

pass permanent parameter to a jar file

I have 3 jars: jar1, jar2 and jar3, all in the same path, which can differ from PC to PC (e.g. c:\prova).
When I run jar1, it moves jar2 to the Windows Startup folder.
I want jar2 to simply activate jar3 at every Windows startup, but of course it can't find jar3, which remains in the original path.
So I want jar1 to pass a reference (in this case the path c:\prova) to jar2 when moving it, or at least on the first call to it.
I find this difficult because:
I can't write the path to a text file inside jar2: text files in jars aren't writable.
I can't put the text file in the Windows Startup folder: it would be opened at every Windows startup.
I can't pass the path as a parameter: that would work for the first call, but I can't store the value for successive calls.
Sorry for my bad English, and thanks for any help!
To add the file Path.txt (with jar3's path) in jar2:
Runtime.getRuntime().exec("jar uf jar2.jar Path.txt");
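For context, a rough sketch of how jar1 might produce Path.txt and embed it before moving jar2 (illustrative only; it assumes the jar tool is on the PATH, and the class name EmbedPath is made up):
import java.io.FileWriter;
import java.io.IOException;

public class EmbedPath {
    public static void main(String[] args) throws IOException, InterruptedException {
        // write the current directory (e.g. c:\prova) into Path.txt
        try (FileWriter w = new FileWriter("Path.txt")) {
            w.write(System.getProperty("user.dir"));
        }
        // update jar2.jar in place with the new Path.txt entry
        Process p = Runtime.getRuntime().exec("jar uf jar2.jar Path.txt");
        p.waitFor(); // let the jar tool finish before moving jar2.jar
    }
}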
To read the file in jar2 (Startup is my class name):
String s = "/Path.txt";
is = Startup.class.getResourceAsStream(s);
br = new BufferedReader(new InputStreamReader(is));
while (null != (line = br.readLine())) {
list.add(line);
}
Thank me!
