When reading rpmbuild spec files, I come across conditional macros that puzzle me.
Example 1:
%if 0%{?rhel} > 7
blah blah
%endif
# I understand the above block checks whether the
# Red Hat Enterprise Linux version is above 7, then blah blah
# But what is the purpose of the '0'?
Example 2:
%if 0%{!?pkg_name:1}
%define pkg_name foo
%endif
# I understand the above block checks whether pkg_name
# is not defined and, if so, defines it with the value foo.
# But what is the purpose of the '0'?
My guess is that the '0' ensures the expression is a number whether the macro expands to nil or to a value, so that rpm always sees a numeric value (such as 06, 0, or 01 in the above examples) instead of a string or an empty string. But I am not sure about it.
Unfortunately, most of the online tutorial materials do not cover this topic.
You got it right; it's a safeguard. The %{?rhel} says "replace this with the rhel macro if it exists, and it is OK if it does not (the ?)."
So, if rpmbuild replaced it with nothing, the resulting %if > 7 would barf.
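You can see the safeguard at work by expanding the macros yourself. A small sketch, assuming rpm is installed (expansions shown as comments):
# On a machine where %rhel is undefined:
rpm --eval '%{?rhel}'     # expands to nothing, so '%if  > 7' would be a syntax error
rpm --eval '0%{?rhel}'    # expands to '0', so '%if 0 > 7' is simply false
# On RHEL 8, '0%{?rhel}' expands to '08', which rpm compares as the number 8.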
I have a *.w file referring to two include files ({incl\include_file.i}, {incl\do_something_file.i}). The first include file contains the definition of a RECID variable "recordid":
DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
I am able to compile the *.w file; the listing file looks as follows (just a fragment):
Prompt>findstr "recordid do_something" listing.txt
...
1 x DEF INPUT-OUTPUT PARAMETER recordid AS RECID.
...
1 x 1 {incl\do_something_file.i
2 x 1 INPUT-OUTPUT recordid
So, the compilation works. On top of that, I've checked the pairs of "&ANALYZE-SUSPEND" and "&ANALYZE-RESUME" clauses and everything is fine.
Nevertheless, I can't open the *.w file, as the mentioned RECID seems not to be known (errors 201 and 196).
Edit after first comments
This is the exact error message I get while opening the *.w file, using the AppBuilder (I'm working with a Dutch version of the tool, hence the Dutch words in between):
---------------------------
Fout (Error)
---------------------------
This file cannot be analyzed by the AppBuilder.
Please check these problems in your file or environment:
** Onbekende veld- of variabelenaam - recordid. (201) [Unknown field or variable name]
** .\incl\<do_something_file>.i Compilatiefout op regel 7. (196) [Compilation error on line 7]
---------------------------
OK
---------------------------
Edit with more information on ANALYZE- clauses
I've run the following findstr command on my code, with the following results:
Prompt>findstr /I "ANALYZE-RESUME ANALYZE-SUSPEND" <filename>.w
&ANALYZE-SUSPEND _VERSION-NUMBER ... GUI
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _DEFINITIONS ...
&ANALYZE-RESUME
...
I confirm that the number of &ANALYZE-SUSPEND clauses equals the number of &ANALYZE-RESUME clauses, that they are in the right sequence (first a SUSPEND, then a RESUME), and that none of them is commented out.
Does anybody have an idea what's going wrong?
The problem was caused by an include sitting outside of a suspend/resume block. To track down such a situation, the following command might be useful:
findstr /I "ANALYZE {incl" <source_file>.w
The result should look like the following:
...
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CONTROL C-Win
&ANALYZE-RESUME
&ANALYZE-SUSPEND _UIB-CODE-BLOCK _CUSTOM _MAIN-BLOCK C-Win
{incl\something.i}
{incl\something_else.i}
&ANALYZE-RESUME
...
This illustrates the following rules (a quick way to check them is sketched below the list):
The number of suspends and resumes must be equal.
Every suspend is to be closed by a resume.
Not one of those can be commented out.
It is advised to have includes between the suspend and the resume.
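As a quick check of the first rule, you can count both clauses; a small sketch using grep (-c counts matching lines, -i ignores case like findstr /I):
grep -ci 'ANALYZE-SUSPEND' source_file.w
grep -ci 'ANALYZE-RESUME' source_file.w
The two numbers should be equal.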
All, I am running the following script to load data onto the Oracle server from a Unix box using sqlldr. Earlier it gave me an error saying sqlldr: command not found. I added "sqlplus << EOF", and now it gives me an unexpected end of file syntax error on line 12, but there are only 11 lines of code. What seems to be the problem, according to you?
#!/bin/bash
FILES='ls *.txt'
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'@SERVERNAME << EOF
sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f
EOF
done
sqlplus will never know what to do with the command sqlldr. They are two separate, complementary command-line utilities for interfacing with an Oracle DB.
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
#you don't want this: FILES='ls *.txt'
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_NM=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME
for f in *.txt
do
# don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
#myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
sqlldr "$SCHEMA_NM/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files that have odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is to quote all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
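To illustrate, assume a hypothetical directory containing a file named "my data.txt":
# FILES=$(ls *.txt); for f in $FILES   -- word-splits the name into
#   "my" and "data.txt", two files that don't exist.
# The glob form hands each name over intact:
for f in *.txt; do
    printf 'loading: %s\n' "$f"    # quoted, so the embedded space survives
done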
In the edit update, I've turned your CTL_PATH and CTL_FILE into variables. I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create an altered file is very shell-like too.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up,
A. Are you sure the Oracle Client package is installed on the machine
that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working
$PATH?
C. Try which sqlldr. If you don't get anything, either it's not
installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr cmd.
find / -name 'sqlldr*' will take a long time to run, but it will
print out the path you want to use.
F. Take the "path" part of what is returned (like
/opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and
edit the script at the 2nd line with:
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH
Problem
I am developing an R package and I want to increase its version automatically each time I build it, so that I can associate my results with package versions. For now I have been using my own ugly function to do that.
My question is: is there a way to do it better? Or, should I avoid doing that in general?
Another option
Another option I was thinking of is to install my package (hosted on GitHub) using devtools::install_github and then save the GithubSHA1 that is stored in the installed DESCRIPTION file together with my results (or add it to plots).
For example, I can get the version and GithubSHA1 like this for the devtools package:
read.dcf(file=system.file("DESCRIPTION", package="devtools"),
         fields=c("Version", "GithubSHA1"))
## Version GithubSHA1
## [1,] "1.5.0.99" "3ae58a2a2232240e67b898f875b8da5e57d1b3a8"
My tries so far
I wrote the following function to produce a new DESCRIPTION file with an updated version and date. (Increasing the major version is something I don't mind doing by hand.)
incVer <- function(pkg, folder=".", increase="patch"){
    ## Read DESCRIPTION from installed package 'pkg' and write a new one in
    ## the specified 'folder'. Two options for 'increase' are "patch" and "minor".
    f <- read.dcf(file=system.file("DESCRIPTION", package=pkg),
                  fields=c("Package", "Type", "Title", "Version", "Date",
                           "Author", "Maintainer", "Description", "License",
                           "Depends", "Imports", "Suggests"))
    curVer <- package_version(f[4])
    if(increase == "patch") {
        curVer[[1,3]] <- ifelse(is.na(curVer$patchlevel), 1, curVer$patchlevel + 1)
    } else if (increase == "minor") {
        curVer[[1,2]] <- ifelse(is.na(curVer$minor), 1, curVer$minor + 1)
        curVer[[1,3]] <- 0
    } else {
        stop(paste("Cannot identify the increase argument:", increase))
    }
    f[4] <- toString(curVer)
    ## Update the date as well
    f[5] <- format(Sys.time(), "%Y-%m-%d")
    write.dcf(f, file=paste(folder, "DESCRIPTION", sep="/"))
}
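A hypothetical invocation from the shell, assuming the function above is saved in incVer.R and "myPkg" is an installed package:
# bump the patch level and write the new DESCRIPTION to the current directory
Rscript -e 'source("incVer.R"); incVer("myPkg", folder=".", increase="patch")'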
If you are using git, then you can use git tags to create a version string. This is how we generate the version string of our igraph library:
git describe HEAD --tags | rev | sed 's/g-/./' | sed 's/-/+/' | rev
It gives you a format like this:
0.8.0-pre+131.ca78343
0.8.0-pre is the last tag on the current branch. (The last released version was 0.7.1, and we create a -pre tag immediately after the release tag.) 131 is the number of commits since the last tag. ca78343 is the first seven characters of the hex id of the last commit.
This would be great, except that you cannot have version strings like this in R packages; R does not allow it. So for R we transform this version string using the following script: https://github.com/igraph/igraph/blob/develop/interfaces/R/tools/convertversion.sh
Essentially it creates a version number that is larger than the last released version and smaller than the next version (the one in the -pre tag). From 0.8.0-pre+131.ca78343 it creates
0.7.999-131
where 131 is the number of commits since the last release.
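A minimal sketch of that transformation (the real convertversion.sh handles more corner cases; the variable names here are mine):
# turn e.g. "0.8.0-pre+131.ca78343" into "0.7.999-131"
desc=$(git describe HEAD --tags | rev | sed 's/g-/./' | sed 's/-/+/' | rev)
next=${desc%%-pre*}                                       # 0.8.0, the next release
ncommits=$(echo "$desc" | sed 's/.*+\([0-9]*\)\..*/\1/')  # 131
major=${next%%.*}                                         # 0
minor=$(echo "$next" | cut -d. -f2)                       # 8
echo "$major.$((minor - 1)).999-$ncommits"                # 0.7.999-131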
I put the generation of the DESCRIPTION file in a Makefile. This replaces the date and the version number:
VERSION=$(shell ./tools/convertversion.sh)
igraph/DESCRIPTION: src/DESCRIPTION version_number
	sed 's/^Version: .*$$/Version: '$(VERSION)'/' $< | \
	sed 's/^Date: .*$$/Date: '`date "+%Y-%m-%d"`'/' > $@
This is quite convenient: you don't need to do anything except add the release tags and the -pre tags.
Btw. this was mostly worked out by my friend and igraph co-developer, Tamás Nepusz, so the credit is his.
For a simpler approach, consider using the crant tool with the -u switch. For instance,
crant -u 3
will increment the third component of the version by one. There is also Git and SVN integration, plus a bunch of other useful switches for roxygenizing, building, checking, etc.
As auto-incrementing version numbering is not going to be built into the devtools package, I figured out a way based on Gabor's answer (the link to igraph in his answer is dead, btw).
When I am about to commit to our repository, I run this bash script to set the date to today and to set the version number based on the latest tag, the .9000 suffix (as suggested in the book R Packages by Hadley Wickham), and the number of commits within that tag:
echo "••••••••••••••••••••••••••••••••••••••••••••"
echo "• Updating package date and version number •"
echo "••••••••••••••••••••••••••••••••••••••••••••"
sed -i -- "s/^Date: .*/Date: $(date '+%Y-%m-%d')/" DESCRIPTION
# get latest tags
git pull --tags --quiet
current_tag=`git describe --tags --abbrev=0 | sed 's/v//'`
current_commit=`git describe --tags | sed 's/.*-\(.*\)-.*/\1/'`
# combine tag (e.g. 0.1.0) and commit number (like 40) increased by 9000 to indicate beta version
new_version="$current_tag.$((current_commit + 9000))" # results in 0.1.0.9040
sed -i -- "s/^Version: .*/Version: ${new_version}/" DESCRIPTION
echo "First 3 lines of DESCRIPTION:"
head -3 DESCRIPTION
echo
# ... after here more commands like devtools::document() and git commit
To be clear - this script actually makes these changes to the DESCRIPTION file.
EDIT: to support hundreds of commits, the script now simply increases the commit sequence number by 9000. So commit #120 in tag v0.6.1 leads to version 0.6.1.9120.
I'm getting a really weird bug on one of my zsh installations. I can do this:
for k in {1..6}; do echo $k; done
# 1
# 2
# 3
# 4
# 5
# 6
but I can't step through it:
for k in {1..6..2}; do echo $k; done
# {1..6..2}
I'm sure my current shell is zsh, and in a different computer it works, so I'm just wondering what option I might have set that changed the default behavior. Any ideas?
While the {x..y} syntax originated in zsh decades ago, it was ksh93 that added the {x..y..step} variant, and zsh only added it in version 4.3.10-test-3, in 2010.
You probably have an older version of zsh there.
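If upgrading is not an option, two workarounds that behave the same in old and new versions (sketches):
# let seq do the stepping (assumes seq is installed):
for k in $(seq 1 2 6); do echo $k; done    # 1 3 5
# or keep the plain range and filter in arithmetic:
for k in {1..6}; do (( k % 2 )) && echo $k; done    # 1 3 5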
ISO/IEC 2022 defines the C0 and C1 control codes. The C0 set are the familiar codes between 0x00 and 0x1f in ASCII, ISO-8859-1 and UTF-8 (eg. ESC, CR, LF).
Some VT100 terminal emulators (eg. screen(1), PuTTY) support the C1 set, too. These are the values between 0x80 and 0x9f (so, for example, 0x84 moves the cursor down a line).
I am displaying user-supplied input. I do not wish the user input to be able to alter the terminal state (eg. move the cursor). I am currently filtering out the character codes in the C0 set; however, I would like to conditionally filter out the C1 set too, if the terminal will interpret them as control codes.
Is there a way of getting this information from a database like termcap?
The only way to do it that I can think of is using C1 requests and testing the return value:
$ echo `echo -en "\x9bc"`
^[[?1;2c
$ echo `echo -e "\x9b5n"`
^[[0n
$ echo `echo -e "\x9b6n"`
^[[39;1R
$ echo `echo -e "\x9b0x" `
^[[2;1;1;112;112;1;0x
The above ones are:
CSI c Primary DA; request Device Attributes
CSI 5 n DSR; Device Status Report
CSI 6 n CPR; Cursor Position Report
CSI 0 x DECREQTPARM; Request Terminal Parameters
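Scripted, such a probe might look like the sketch below; it assumes a readable /dev/tty and treats any timely reply to the 8-bit CSI as evidence that the terminal interprets C1 codes:
#!/bin/bash
old=$(stty -g < /dev/tty)                  # save terminal settings
stty raw -echo min 0 time 10 < /dev/tty    # raw mode, 1-second read timeout
printf '\x9b5n' > /dev/tty                 # 8-bit CSI + '5n' (Device Status Report)
reply=$(dd bs=64 count=1 2>/dev/null < /dev/tty)
stty "$old" < /dev/tty                     # restore settings
if [ -n "$reply" ]; then
    echo "got a reply: the terminal appears to interpret C1 controls"
else
    echo "no reply: C1 probably not interpreted (or filtered along the way)"
fi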
The terminfo/termcap file that ESR maintains (link) has a couple of these requests in user strings 7 and 9 (user7/u7, user9/u9):
# INTERPRETATION OF USER CAPABILITIES
#
# The System V Release 4 and XPG4 terminfo format defines ten string
# capabilities for use by applications, .... In this file, we use
# certain of these capabilities to describe functions which are not covered
# by terminfo. The mapping is as follows:
#
# u9 terminal enquire string (equiv. to ANSI/ECMA-48 DA)
# u8 terminal answerback description
# u7 cursor position request (equiv. to VT100/ANSI/ECMA-48 DSR 6)
# u6 cursor position report (equiv. to ANSI/ECMA-48 CPR)
#
# The terminal enquire string should elicit an answerback response
# from the terminal. Common values for u9 will be ^E (on older ASCII
# terminals) or \E[c (on newer VT100/ANSI/ECMA-48-compatible terminals).
#
# The cursor position request (u7) string should elicit a cursor position
# report. A typical value (for VT100 terminals) is \E[6n.
#
# The terminal answerback description (u8) must consist of an expected
# answerback string. The string may contain the following scanf(3)-like
# escapes:
#
# %c Accept any character
# %[...] Accept any number of characters in the given set
#
# The cursor position report (u6) string must contain two scanf(3)-style
# %d format elements. The first of these must correspond to the Y coordinate
# and the second to the X coordinate. If the string contains the sequence %i, it is
# taken as an instruction to decrement each value after reading it (this is
# the inverse sense from the cup string). The typical CPR value is
# \E[%i%d;%dR (on VT100/ANSI/ECMA-48-compatible terminals).
#
# These capabilities are used by tack(1m), the terminfo action checker
# (distributed with ncurses 5.0).
Example:
$ echo `tput u7`
^[[39;1R
$ echo `tput u9`
^[[?1;2c
Of course, if you only want to prevent display corruption, you can take the less(1) approach and let the user switch between displaying and not displaying control characters (the -r and -R options in less). Also, if you know your output charset, the ISO-8859 charsets have the C1 range reserved for control codes (so they have no printable chars in that range).
Actually, PuTTY does not appear to support C1 controls.
The usual way of testing this feature is with vttest, which provides menu entries for changing the input and output separately to use 8-bit controls. PuTTY fails the sanity check for each of those menu entries, and if the check is disabled, the result confirms that PuTTY does not honor those controls.
I don't think there's a straightforward way to query whether the terminal supports them. You can try nasty hacky workarounds (like print them and then query the cursor position) but I really don't recommend anything along these lines.
I think you could just filter out these C1 codes unconditionally. Unicode declares the U+0080..U+009F range as control characters anyway; I don't think you should ever use them for anything else.
(Note: you used the example 0x84 for cursor down. It's in fact U+0084 encoded in whichever encoding the terminal uses, e.g. 0xC2 0x84 for UTF-8.)
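If the output charset is a single-byte one such as ISO-8859-1, stripping the C1 range unconditionally can be as simple as this sketch (NOT safe for UTF-8, where the same byte values occur as continuation bytes of multi-byte characters):
# delete the raw C1 bytes 0x80-0x9F (octal 200-237)
LC_ALL=C tr -d '\200-\237' < input.txt > output.txt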
Doing it 100% automatically is challenging at best. Many, if not most, Unix interfaces are smart (xterms and whatnot), but you don't actually know whether you're connected to an ASR33 or a PC running MSDOS.
You could try some of the terminal interrogation escape sequences and time out if there is no reply. But then you might have to fall back and maybe ask the user what kind of terminal they are using.