Setting the NOSIGNAL option for the curl command line?

From the libcurl documentation, I understand that CURLOPT_NOSIGNAL has to be set to 1 when using libcurl in a multi-threaded program.
However, if I am calling curl from the command line, I don't see a NOSIGNAL switch/option. How do I set NOSIGNAL when calling curl directly?

You can't: as you said, there is no such option documented in the curl help (and you can also verify that there is no reference to CURLOPT_NOSIGNAL in the command-line tool's source code; see src/tool_operate.c).
Now the question is why you would need this, as Celada asked. Using libcurl in a multi-threaded context makes sense (e.g. you may want to create/reuse curl handles from several threads), and you should then follow the best practices, but I don't see the point for the CLI tool.
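For reference, this is roughly what setting the option looks like in a program that links against libcurl (a minimal sketch; the fetch function, the URL handling and the lack of error checking are only illustrative):

#include <curl/curl.h>

/* Assumes curl_global_init() was already called once at program start. */
void fetch(const char *url)
{
    CURL *handle = curl_easy_init();
    if (!handle)
        return;
    /* Keep libcurl from using signals for timeouts; this is what the
       documentation asks for when handles are used from multiple threads. */
    curl_easy_setopt(handle, CURLOPT_NOSIGNAL, 1L);
    curl_easy_setopt(handle, CURLOPT_URL, url);
    curl_easy_perform(handle);
    curl_easy_cleanup(handle);
}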

Remap Shortcut in Atom

I am trying to remap the command that runs my Python source file, which comes from the atom-python-run package, to the shortcut cmd-r, which is currently used by the replace function.
If I type:
'cmd-r': 'unbind!'
it says the command is not found, so I cannot unbind it.
Do I need to unbind the command, or can I somehow assign the new command without doing all of that?
I found this online for remapping a command from another package, just as a template for how to remap:
'atom-workspace atom-text-editor:not([mini])':
  'ctrl-j': 'unset!'
However, I could not figure out how to adapt that for my purpose. Is there a way to rewrite it for my case, or is that something different?
Thanks for your time.
You don't need to unset the keymap; you can simply override it by assigning the name of the run command to it:
'atom-workspace atom-text-editor:not([mini])':
  'ctrl-r': 'Python run: run-f5'
The author of this package made the command name a bit hard to guess, since it's more common to use a slug of the command (e.g. python-run:run-f5).
You can get a full list of available commands by evaluating atom.commands.registeredCommands in the developer console.
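Applied to the original question (cmd-r instead of ctrl-r), the keymap.cson entry would presumably be the following, assuming 'Python run: run-f5' is indeed the command name reported by registeredCommands:

'atom-workspace atom-text-editor:not([mini])':
  'cmd-r': 'Python run: run-f5'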

Parallel downloading a file using command line and lftp

I am looking into how to use lftp to download a file in parallel over HTTP. I see this example:
lftp -c "pget -n 10 http://example.com/foo.bar"
However, I am not finding any information on how to specify custom HTTP headers and cookie values here. Any help would be appreciated.
Thanks!
For cookies there is the http:cookie setting; see the man page. Custom headers are not supported yet, but a few specific ones are supported via the http:* settings.
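For example, the cookie setting can be passed in the same -c command string before pget (the cookie name and value here are just placeholders):

lftp -c 'set http:cookie "sessionid=abc123"; pget -n 10 http://example.com/foo.bar'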

Is there a way to run the "sys.list_modules" command without a minion?

As far as the documentation goes, if one would like to query for available execution modules, the following command should be used:
salt '<minion_name>' sys.list_modules
My question is: is it possible to run the above without a minion?
(e.g. salt sys.list_modules; wouldn't that be cleaner?)
If you just want to know all the modules available to you, or the modules available in your current SaltStack version, you can run the query locally without a minion:
salt-call --local sys.list_modules
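If you only need to check whether one particular module is present, you can filter the output (the module name here is just an example):

salt-call --local sys.list_modules | grep cron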

Unix CouchDB installation + KERL

I am attempting to install CouchDB on my planetlab Unix machines from the source packages.
I installed Erlang r16b01 using Kerl: http://docs.basho.com/riak/1.3.0/tutorials/installation/Installing-Erlang/#Install-using-kerl
I installed openssl from the source package.
So, I ran "./configure --with-erlang=path/to/erlang/using/kerl" and got the error:
"configure: error: Could not find the Erlang crypto library"
This error indicates that Erlang was not compiled with OpenSSL support.
So, I tried using
"KERL_CONFIGURE_OPTIONS=--with-ssl=path/to/openssl/lib"
(I'm not sure whether I'm using the above setting correctly.)
Then, reinstalled and reactivated Erlang.
This still brings up the same error.
I checked whether Erlang could execute "crypto.start()"; it let me type the command, but it doesn't reply "ok" like in the documentation: http://dennisreimann.de/blog/installing-couchdb-and-erlang-on-ubuntu-hardy/
Please help!
1) Did you first create a couchdb user and then do everything as that user, including the Erlang build and install? That might be easier.
2) There is an error in your test: you need to terminate your command in the erl shell with a dot, otherwise you get no response, as you already noticed. crypto:start(). is correct:
$ erl
Eshell V6.1 (abort with ^G)
1> crypto:start().
** exception error: undefined function crypto:start/0
After a successful build and installation it will respond ok:
$ erl
Eshell V6.1 (abort with ^G)
1> crypto:start().
ok
And you can also stop it afterwards:
2> crypto:stop().
ok
3>
=INFO REPORT==== 10-Aug-2014::20:22:06 ===
application: crypto
exited: stopped
type: temporary
3) You need the development package of OpenSSL, including the header files, as well as the openssl command-line binary. At least version 0.9.8 of OpenSSL is required. As a side note for people on Debian and Ubuntu, it is usually enough to run:
sudo apt-get install openssl libssl-dev
In your case, you should make sure that your OpenSSL source install includes all of the above (the openssl binary and the header files).
4) Most probably it is an issue with finding the libraries. I recommend reading this answer, which deals with a Unix-based system; it could point you in the right direction:
https://stackoverflow.com/a/14776521/362951
Depending on the error message after crypto:start()., you could try to add the library path, log out of the shell, log back in, activate kerl again and retry. There is no need to rebuild if OpenSSL was present and found at compile time.
5) Your kerl configuration looks good. Again using Debian/Ubuntu paths, a ~/.kerlrc could look like:
KERL_CONFIGURE_OPTIONS="--with-ssl=/usr/lib/ssl"
And hopefully the SSL path you are inserting is the correct one.
You could also try omitting the path; maybe it finds the correct one by itself. According to http://www.erlang.org/doc/installation_guide/INSTALL.html it looks like it is valid to do so:
KERL_CONFIGURE_OPTIONS="--with-ssl"
Note that kerl currently produces a build without crypto silently if it fails to find the headers: https://github.com/yrashk/kerl/issues/31. A rebuild sequence is sketched after point 6 below.
6) I see you are passing the --with-erlang parameter to CouchDB's configure; does it point to the right directory? Maybe it needs to go one level deeper or one level up.
Otherwise an older, system-wide Erlang could be picked up, if one is found.
Also, I am not sure whether the combination of a kerl environment and passing the Erlang location via the --with-erlang parameter to CouchDB works. I did not try the --with-erlang parameter with kerl, because I activate the kerl environment before compiling CouchDB and then again before the CouchDB start script.
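Regarding the rebuild mentioned in point 5: after changing KERL_CONFIGURE_OPTIONS you have to build and install a fresh Erlang release for the option to take effect. A minimal sequence could look like this (the build name, install path and the quick crypto check are only examples):

KERL_CONFIGURE_OPTIONS="--with-ssl" kerl build R16B01 r16b01_ssl
kerl install r16b01_ssl ~/erlang/r16b01_ssl
. ~/erlang/r16b01_ssl/activate
erl -noshell -eval 'ok = crypto:start(), io:format("crypto ok~n"), halt().'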

Tool for building software

I need something like make, i.e. dependencies + executing shell commands, where a failing command stops the build.
But it should be more deeply integrated with the shell: in make, each recipe line is executed in a separate shell, so it is not easy to set a variable in one line and use it in the following line (and I do not want a line-continuation backslash at the end of every line, because it is not readable).
I want simple syntax (no XML) with control flow and functions (which are missing in make).
It does not have to support compilation. I just have to bind together several components built using autotools, package them, trigger tests and publish the results.
I looked at: make, ant, maven, scons, waf, nant, rake, cons, cmake, jam and they do not fit my needs.
Take a look at doit.
You can use shell commands or Python functions to define tasks (builds).
It is very easy to use: you write scripts in Python, and there is "no API" (you don't need to import anything in your script).
It has good support for tracking dependencies and targets.
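A minimal dodo.py might look like this (the task names, commands and file names are only illustrative; doit picks up any function whose name starts with task_):

# dodo.py -- run with `doit`
def task_build():
    # Build one component; the shell command is just an example.
    return {
        'actions': ['./configure && make'],
        'file_dep': ['configure', 'Makefile.am'],
        'targets': ['myapp'],
    }

def task_package():
    # Package the built binary; file_dep chains it after task_build.
    return {
        'actions': ['tar czf myapp.tar.gz myapp'],
        'file_dep': ['myapp'],
        'targets': ['myapp.tar.gz'],
    }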
Have a look at fabricate.
If that does not fulfill your needs or if you would rather not write your build script in Python, you could also use a combination of shell scripting and fabricate. Write the script as you would to build your project manually, but prepend build calls with "fabricate.py" so build dependencies are managed automatically.
Simple example:
#!/bin/bash
EXE="myapp"
CC="fabricate.py gcc"   # let fabricate handle dependencies
FILES="file1.c file2.c file3.c"
OBJS=""
# compile
for F in $FILES; do
    if ! $CC -c $F; then
        echo "Build failed while compiling $F" >&2
        exit 1
    fi
    OBJS="$OBJS ${F/.c/.o}"
done
# link
$CC -o $EXE $OBJS
Given that you want control flow, functions, everything operating in the same environment and no XML, it sounds like you want to use the available shell script languages (sh/bash/ksh/zsh), or Perl (insert your own favourite scripting language here!).
I note you've not looked at a-a-p. I'm not familiar with it, other than that it's a make-like system from the people who brought us Vim, so you may want to look it over.
A mix of makefiles and a scripting language that chooses which makefile to run could also do it.
I have had the same needs. My current solution is to use makefiles to accurately represent the dependency graph (you have to read "Recursive Make Considered Harmful"). Those makefiles trigger bash scripts that take makefile variables as parameters. This way you do not have to deal with the shell-context problem, and you get a clear separation between the dependencies and the actions (a small sketch follows at the end of this answer).
I'm currently considering waf as it seems well designed and fast enough.
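A tiny sketch of that makefile-plus-scripts split (the targets and script names are only placeholders):

# Makefile: only the dependency graph; all actions are delegated to shell scripts
dist/app.tar.gz: build/component_a build/component_b
	./scripts/package.sh $@ $^

build/component_a:
	./scripts/build_component.sh component_a $@

build/component_b:
	./scripts/build_component.sh component_b $@

Each script receives what it needs as positional parameters ($@, $^) and can set and reuse shell variables freely, since the whole action runs in a single shell.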
You might want to look at SCons; it's a Make-replacement written in Python.
