I'm trying to work out my build process for RPMs.
When I produce source RPMs, it's including %{dist} in the file name. I would prefer that it only do that for the binary RPMs, since the source RPMs are not distribution specific.
The dist macro is defined in /etc/rpm/macros.dist. How would I undefine it while building source RPMs?
foo.spec:
Name: foo
Version: 0.1
Release: 1%{?dist}
# etc...
Build command:
$ rpmbuild -bs foo.spec
$ ls ../SRPMS
foo-0.1-1.el6.src.rpm
Simple:
$ rpmbuild --undefine dist -bs foo.spec
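With dist undefined, %{?dist} in Release expands to nothing, so the resulting source RPM name should drop the distribution tag:
$ ls ../SRPMS
foo-0.1-1.src.rpm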
I don't believe there's a way to conditionally check what kind of package you are building, but you could try defining a macro on the command line only when building source packages. You would then conditionally check for the macro at the top of your spec file:
$ rpmbuild -bs --define "mymacro 1" mypackage.spec
mypackage.spec:
%if 0%{?mymacro}
%undefine dist
%endif
To build a binary package, just omit the --define "mymacro 1":
$ rpmbuild -bb mypackage.spec
I'm trying to install Grothendieck_Schemes from
https://www.isa-afp.org/entries/Grothendieck_Schemes.html#
I downloaded the tar and uncompressed it. The contents are
$ ls -l
Comm_Ring.thy
Group_Extras.thy
ROOT
Scheme.thy
Set_Extras.thy
Topological_Space.thy
document/
    root.bib
    root.tex
Then I run
$ isabelle components -u ./Downloads/Grothendieck_Schemes/
Added component "/home/username/Downloads/Grothendieck_Schemes"
but when I start Isabelle I get error
C:\Users\Aleksander\.isabelle\Isabelle2022\jedit\jars\isabelle_jedit_main.jar:
Cannot start:
*** Bad imports session "Jacobson_Basic_Algebra" for "Grothendieck_Schemes" (line 3 of "/home/username/Downloads/Grothendieck_Schemes/ROOT")
So I suppose I should install the dependency "Jacobson_Basic_Algebra"? But there is no such package on AFP. I'm thinking that maybe I should have removed the ROOT file. Its contents are
chapter AFP

session "Grothendieck_Schemes" (AFP) = HOL +
  options [timeout = 600]
  sessions
    "Jacobson_Basic_Algebra"
  theories
    Scheme
  document_files
    "root.tex"
    "root.bib"
I tried running this instead but it fails
$ isabelle components -u ./Downloads/Grothendieck_Schemes/*.thy
*** Bad component directory: "/home/username/Downloads/Grothendieck_Schemes/Comm_Ring.thy"
How do I install these packages properly?
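(A possible alternative, following the AFP usage documentation: if the Jacobson_Basic_Algebra session is in fact available as part of the full AFP sources, register the whole AFP tree as a single component instead of one entry directory, so that cross-entry session dependencies can resolve. A minimal sketch, assuming the complete archive is unpacked at ~/afp:
$ isabelle components -u ~/afp/thys
)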
Here's what I'm doing:
I have a blog that uses blogdown to render .Rmd files.
Some of the code snippets in the blog are in Python. I'm using reticulate for that.
I'm using a GitHub workflow to build and publish the blog as part of a larger website. This workflow sets up the environment and package dependencies in miniconda.
The last time this ran was six months ago. At that time, it worked. Now, it does not. I can't seem to replicate the behavior locally for more detailed debugging.
It seems to be trying to put a mamba command into normalizePath instead of a filesystem path (www-main is the name of the repository):
conda activate www-main
Rscript -e 'blogdown::build_site(local=FALSE, run_hugo=FALSE, build_rmd="content/blog/2020-08-28-api.Rmd")'
shell: /usr/bin/bash -l {0}
env:
CONDA_PKGS_DIR: /home/runner/conda_pkgs_dir
Rendering content/blog/2020-08-28-api.Rmd...
[...]
Quitting from lines 401-410 (2020-08-28-api.Rmd)
Error in normalizePath(conda, winslash = "/", mustWork = TRUE) :
path[1]="# cmd: /usr/share/miniconda/condabin/mamba update --name www-main --file /home/runner/work/www-main/www-main/conda": No such file or directory
Calls: local ... python_munge_path -> get_python_conda_info -> normalizePath
Execution halted
Error: Failed to render content/blog/2020-08-28-api.Rmd
Execution halted
Lines 401-410 of 2020-08-28-api.Rmd are a Python code block:
400 ```{python python-data, dev='svg'}
401 import covidcast
402 from datetime import date
403 import matplotlib.pyplot as plt
404
405 data = covidcast.signal("fb-survey", "smoothed_hh_cmnty_cli",
406 date(2020, 9, 8), date(2020, 9, 8),
407 geo_type="state")
408 covidcast.plot_choropleth(data, figsize=(7, 5))
409 plt.title("% who know someone who is sick, Sept 8, 2020")
410 ```
The useful bits of the output of conda info, in case it helps:
active environment : www-main
active env location : /usr/share/miniconda/envs/www-main
shell level : 1
user config file : /home/runner/.condarc
populated config files : /home/runner/.condarc
conda version : 4.12.0
conda-build version : not installed
python version : 3.9.12.final.0
virtual packages : __linux=5.15.0=0
__glibc=2.31=0
__unix=0=0
__archspec=1=x86_64
base environment : /usr/share/miniconda (writable)
conda av data dir : /usr/share/miniconda/etc/conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/conda-forge/linux-64
https://conda.anaconda.org/conda-forge/noarch
https://repo.anaconda.com/pkgs/main/linux-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/linux-64
https://repo.anaconda.com/pkgs/r/noarch
package cache : /home/runner/conda_pkgs_dir
envs directories : /usr/share/miniconda/envs
/home/runner/.conda/envs
platform : linux-64
user-agent : conda/4.12.0 requests/2.27.1 CPython/3.9.12 Linux/5.15.0-1020-azure ubuntu/20.04.5 glibc/2.31
UID:GID : 1001:121
netrc file : None
offline mode : False
I found this, but their workaround doesn't make sense for me since I'm not using papermill: https://github.com/rstudio/reticulate/issues/1184
I found this, but my paths don't have spaces: https://github.com/rstudio/reticulate/issues/1149
I found this, but their problem includes an entirely reasonable value for path[1], unlike mine: How can I tell R where the conda environment is via a docker image?
The build environment for this is a bit of a bear, but I can probably put together a minimum working (/nonworking) example if needed; let me know.
I tracked this down to at least two bits of weird/buggy behavior in reticulate and found a workaround: switch from vanilla miniconda to Mambaforge.
The TL;DR seems to be that whatever the ubuntu-latest setup-miniconda@v2 environment started putting into meta/history no longer includes a create line, which is what reticulate needs in order to figure out which conda goes with which python. Specifically, reticulate (1) ignores the reticulate.conda_binary setting for some reason, and (2) parses the lines of the history file with a more restrictive regex than the one it uses to select them. Mambaforge does include the create line, so reticulate is happy.
- uses: conda-incubator/setup-miniconda@v2
  with:
    python-version: 3.9
    activate-environment: www-main
    miniforge-variant: Mambaforge
    miniforge-version: latest
    use-mamba: true
    use-only-tar-bz2: true  # (for caching support)
- name: Update environment
  run: mamba env update -n www-main -f environment.yml
I'm trying to use the LLVM bindings in OCaml. In my file test.ml, I have one line of code:
open Llvm
When I run the command
ocamlbuild -use-ocamlfind test.byte -package llvm
I get this result:
+ ocamlfind ocamldep -package llvm -modules test.ml > test.ml.depends
ocamlfind: Package `llvm' not found
Command exited with code 2.
Compilation unsuccessful after building 1 target (0 cached) in 00:00:00.
What did I do wrong here? Thanks.
BTW, the _tags file contains:
"src": traverse
<src/{lexer,parser}.ml>: use_camlp4, pp(camlp4of)
<*.{byte,native}>: g++, use_llvm, use_llvm_analysis
myocamlbuild.ml contains:
open Ocamlbuild_plugin;;
ocaml_lib ~extern:true "llvm";;
ocaml_lib ~extern:true "llvm_analysis";;
flag ["link"; "ocaml"; "g++"] (S[A"-cc"; A"g++"]);;
I don't know why the instructions that you're using are so complex. You don't have to do anything like this to use llvm bindings in OCaml, provided you have installed them via opam.
Here is the recipe:
Install llvm bindings via opam.
It could be as simple as
opam install llvm
However, opam may try to install the latest version of the bindings, which may not match the LLVM available on your system, so pick the particular version that you have and do the following (suppose you have llvm-3.8):
opam install conf-llvm.3.8
opam install llvm --criteria=-changed
(The --criteria flag will prevent opam from upgrading conf-llvm to the newest version.)
Once it succeeds, you can easily compile your programs without any additional scaffolding.
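Since the original error came from ocamlfind not seeing the package, a quick sanity check that the bindings are actually registered (assuming findlib/ocamlfind is installed) is:
$ ocamlfind list | grep -i llvm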
Create and build your project
create a fresh folder, e.g.,
mkdir llvm-project
cd llvm-project
create a sample application (borrowed from a tutorial that I found online):
cat >test.ml <<EOF
open Llvm

let _ =
  let llctx = Llvm.global_context () in
  let llmem = Llvm.MemoryBuffer.of_file Sys.argv.(1) in
  let llm = Llvm_bitreader.parse_bitcode llctx llmem in
  Llvm.dump_module llm;
  ()
EOF
compile it for bytecode
ocamlbuild -pkgs llvm,llvm.bitreader test.byte
or to native code
ocamlbuild -pkgs llvm,llvm.bitreader test.native
run it
./test.native mycode.bc
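To have a bitcode file to feed it, any LLVM front end will do; for instance, assuming clang is installed and you have some C file foo.c at hand (foo.c is just a placeholder name):
$ clang -c -emit-llvm foo.c -o mycode.bc
$ ./test.native mycode.bc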
PROBLEM:
I am having difficulty running Healpix-IDL routines with GDL with the current version of Healpix, Healpix_3.20.
The easiest thing to do would be to follow user gilo in this post:
http://sourceforge.net/p/gnudatalanguage/discussion/338692/thread/6546b9ad/?limit=25#324d
All Healpix IDL routines are downloaded in ~/user/downloads/Healpix_3.20/src/idl
Then, use !PATH i.e.:
GDL> !PATH = expand_path('+/user/myname/downloads/HEALPix_3.20/')+':'+!PATH
and after that you have access to all healpix procedures within gdl
That doesn't work for me. I try the commands hidl and hidlde in the Terminal (I'm using Mac OS X Yosemite, 10.10.5):
GDL> hidl
% Procedure not found: HIDL
% Execution halted at: $MAIN$
Any other solutions?
POSSIBLE SOLUTIONS:
In the installation procedures install.pdf, Section 7.6 (hidl usage) describes that hidl is sometimes not recognized. A fix is setting the environment variable IDL_STARTUP to the HEALPix startup file HEALPix_startup, including the directory path to the file, i.e. use
setenv IDL_STARTUP /disk1/user1/HEALPix_2.15a/src/idl/HEALPix_startup for C shell, csh
export IDL_STARTUP="+/disk1/user1/HEALPix_2.15a/src/idl/HEALPix_startup" for sh, ksh, bash
For my routines, this should be
export IDL_STARTUP="+/usr/downloads/HEALPix_3.20/src/idl/HEALPix_startup"
on bash Terminal
(Recall syntax:
export key=value is sh, ksh, bash
setenv key value is csh)
This doesn't work for me. After executing the command, and entering gdl, I get:
% Error opening startup file: /user/myname/downloads/HEALPix_3.20/src/idl/HEALPix_startup
Following Section 7.8 Using GDL instead of IDL, I try
$ export IDL_TMPDIR=/tmp
$ gdl
This doesn't work either.
Following Using HEALPix IDL together with other IDL libraries in the IDL routines manual, idl.pdf, I try
export IDL_PATH="+/user/myname/downloads/HEALPix_3.20/src/idl/:+/opt/local/share/gnudatalanguage/lib:<IDL_DEFAULT>"
export IDL_STARTUP="+/user/myname/downloads/HEALPix_3.20/src/idl/HEALPix_startup"
gdl
output error:
% Error opening startup file: /user/myname/downloads/HEALPix_3.20/src/idl/HEALPix_startup.
I try
export IDL_PATH="+/opt/local/share/gnudatalanguage/lib:<IDL_DEFAULT>"
hidl
output error:
-bash: hidl: command not found
Nothing works.
BACKGROUND:
Healpix has the installation procedures here, at source forge.net: healpix.sourceforge.net/pdf/install.pdf
and the IDL routines here: healpix.sourceforge.net/pdf/idl.pdf
The sourcecode is here: sourceforge.net/projects/healpix/
In order to install Healpix, you use ./configure and then make. (See install.pdf, section 4)
Healpix IDL routines are downloaded in /user/myname/downloads/HEALPix_3.20/
GDL routines are located in /opt/local/share/gnudatalanguage/lib/
hidl is an alias to start IDL with the Healpix startup file and path. Type it on the system command line, not the IDL command line. You must run through their configure system to define hidl.
In subdirectory ~/.healpix/3_20_Darwin there are two files, config and idl.sh.
The config is
# configuration for Healpix 3.20
HEALPIX=/Users/myname/downloads/Healpix_3.20 ; export HEALPIX
HPX_CONF_DIR=/Users/myname/.healpix/3_20_Darwin
if [ -r ${HPX_CONF_DIR}/idl.sh ] ; then . ${HPX_CONF_DIR}/idl.sh ; fi
if [ -r ${HPX_CONF_DIR}/f90.sh ] ; then . ${HPX_CONF_DIR}/f90.sh ; fi
if [ -r ${HPX_CONF_DIR}/cpp.sh ] ; then . ${HPX_CONF_DIR}/cpp.sh ; fi
if [ -r ${HPX_CONF_DIR}/c.sh ] ; then . ${HPX_CONF_DIR}/c.sh ; fi
The idl.sh file is
# IDL configuration for HEALPix Fri MONTH DAY TIME EDT YEAR
# make sure IDL related variables are global
export IDL_PATH IDL_STARTUP
# back up original IDL config, or give default value
OIDL_PATH="${IDL_PATH-<IDL_DEFAULT>}"
OIDL_STARTUP="${IDL_STARTUP}"
# create Healpix IDL config, and return to original config after running Healpix-enhanced IDL
HIDL_PATH="+${HEALPIX}/src/idl:${OIDL_PATH}"
HIDL_STARTUP="${HEALPIX}/src/idl/HEALPix_startup"
alias hidl="IDL_PATH=\"${HIDL_PATH}\" ; IDL_STARTUP=${HIDL_STARTUP} ; idl ; IDL_PATH=\"${OIDL_PATH}\" ; IDL_STARTUP=${OIDL_STARTUP} "
alias hidlde="IDL_PATH=\"${HIDL_PATH}\" ; IDL_STARTUP=${HIDL_STARTUP} ; idlde ; IDL_PATH=\"${OIDL_PATH}\" ; IDL_STARTUP=${OIDL_STARTUP} "
So, if I manually set the paths in this idl.sh file and then source the config file (i.e. . ~/.healpix/3_20_Darwin/config), that should allow me to use hidl on the command line to run the Healpix IDL routines, right?
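A minimal sketch of that idea, assuming the config file really is at ~/.healpix/3_20_Darwin/config as shown above. Note that the shipped aliases launch idl, so with only GDL installed you would need an equivalent alias that launches gdl instead (hgdl below is just a made-up name; GDL is supposed to honor IDL_PATH and IDL_STARTUP):
$ . ~/.healpix/3_20_Darwin/config   # exports HEALPIX and sources idl.sh
$ alias hgdl='IDL_PATH="+$HEALPIX/src/idl:<IDL_DEFAULT>" IDL_STARTUP="$HEALPIX/src/idl/HEALPix_startup" gdl'
$ hgdl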
I have seen that there is a free, open-source FSM generator named NunniFsm. For that you can refer to http://www.nunnisoft.ch/nunnifsmgen/en/home.jsp. I have downloaded the source package and examples. The readme explains the steps to create an FSM for Java projects, but doesn't explain them for C and C++. If anybody has used it before, please explain the steps for how I can use it for C and C++ projects.
Thanks in advance.
OK, I got it. Running
java -jar NunniFSMGen.jar -?
prints the usage:
java -jar NunniFSMGen.jar [options] configurationFile
options:
-? : show usage
-V : show version
-i : show program info
-v : verbose
-o {java|c|c++} : language generated (default: java)
-p : package (default: read from file else 'generated')
-n : basename (default: read from file else 'Context')
configurationFile : file containing the FSM transition table
So by using the -o option we can generate the code for C, C++, or Java.
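For example, generating the C version should look something like this, where fsm.txt is a hypothetical transition-table file and MyFsm is a chosen basename (the -o and -n flags are taken from the usage above):
$ java -jar NunniFSMGen.jar -o c -n MyFsm fsm.txt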