python-xarray: open_dataarray Segmentation fault on HPC - netcdf

I'm struggling to understand why reading a NetCDF file with open_dataarray produces a Segmentation fault (core dumped) on the HPC I use, while reading the same file with open_dataarray on my Mac works fine.
Looking into this further, it is the NPac file (the sub-section I created) that seems to have the issue. Here are the steps I took to generate the file:
WaveWatchIII outputs a file.
Extract July: $ ncks -d time,0,30 in.nc out.nc (can open out.nc on the HPC).
Extract variable: $ ncks -v hs in.nc out.nc (can open out.nc on the HPC).
Extract domain: $ ncks -d longitude,100.0,290.0 -d latitude,0.0,65.0 in.nc out.nc (cannot open out.nc on the HPC, but can open out.nc on the Mac).
This is the first time I've seen this issue, and I believe it is due to properties of the domain of the NetCDF file. I'm guessing it may have something to do with library versions as well. I do most of the heavy lifting on the HPC and use my Mac for testing and understanding, so it would be nice to get this working on the HPC.
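One way to narrow down where the crash happens (a minimal sketch, not from the original post; the file names are the placeholders from the steps above) is to bypass xarray's CF decoding and then try the lower-level netCDF4 reader directly. If the raw read also crashes, the fault is in the netCDF/HDF5 C stack rather than in xarray itself:

import netCDF4
import xarray as xr

# Skip CF decoding (scale_factor/add_offset, time units) to test whether
# the decoding step is involved in the crash.
ds_raw = xr.open_dataset('out.nc', decode_cf=False)
print(ds_raw)

# Read a small slice through netCDF4 directly; a crash here points at the
# C library or the file itself rather than at xarray.
with netCDF4.Dataset('out.nc') as nc:
    print(nc.variables['hs'][0, :5, :5])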
The NetCDF file can be downloaded here
ncdump -h ww3.Hs.July.NPac.nc
netcdf ww3.Hs.July.NPac {
dimensions:
time = UNLIMITED ; // (31 currently)
latitude = 66 ;
longitude = 191 ;
variables:
short hs(time, latitude, longitude) ;
hs:long_name = "significant height of wind and swell waves" ;
hs:standard_name = "sea_surface_wave_significant_height" ;
hs:globwave_name = "significant_wave_height" ;
hs:units = "m" ;
hs:_FillValue = -32767s ;
hs:scale_factor = 0.002f ;
hs:add_offset = 0.f ;
hs:valid_min = 0 ;
hs:valid_max = 32000 ;
float latitude(latitude) ;
latitude:units = "degree_north" ;
latitude:long_name = "latitude" ;
latitude:standard_name = "latitude" ;
latitude:valid_min = -90.f ;
latitude:valid_max = 90.f ;
latitude:axis = "Y" ;
float longitude(longitude) ;
longitude:units = "degree_east" ;
longitude:long_name = "longitude" ;
longitude:standard_name = "longitude" ;
longitude:valid_min = -180.f ;
longitude:valid_max = 180.f ;
longitude:axis = "X" ;
double time(time) ;
time:long_name = "julian day (UT)" ;
time:standard_name = "time" ;
time:units = "days since 1850-01-01T00:00:00Z" ;
time:conventions = "relative julian days with decimal part (as parts of the day )" ;
time:axis = "T" ;
// global attributes:
:WAVEWATCH_III_version_number = "4.18b" ;
:WAVEWATCH_III_switches = "NC4 F90 NOGRB NOPA LRB4 SHRD PR3 UQ FLX0 LN1 ST4 STAB0 NL1 BT1 DB1 MLIM TR0 BS0 IC0 REF0 XX0 WNT1 WNX1 CRT1 CRX1 O0 O1 O2 O3 O4 O5 O6 O7 O11 O14 TRKNC" ;
:SDS4\ namelist\ parameter\ WHITECAPWIDTH = 0.3f ;
:product_name = "ww3.199307.nc" ;
:area = "Indian Ocean Pacfic 1 degree" ;
:latitude_resolution = " 1.0000000" ;
:longitude_resolution = " 1.0000000" ;
:southernmost_latitude = "-70.0000000" ;
:northernmost_latitude = "65.0000000" ;
:westernmost_longitude = "20.0000000" ;
:easternmost_longitude = "295.0000000" ;
:minimum_altitude = "-12000 m" ;
:maximum_altitude = "9000 m" ;
:altitude_resolution = "n/a" ;
:start_date = "1993-07-01T00:00:00Z" ;
:stop_date = "1993-07-31T00:00:00Z" ;
:history = "Sat Dec 2 17:53:06 2017: ncks -O -d longitude,100.0,290.0 -d latitude,0.0,65.0 /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.Hs.July.nc /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.Hs.July.NPac.nc\nSat Dec 2 17:53:06 2017: ncks -O -v hs /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.July.nc /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.Hs.July.nc\nSat Dec 2 17:53:05 2017: ncks -O -d time,0,30 /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.19930701_19940630.nc /projects/rsmas/kirtman/rxb826/WW3exps/IO_Pac_CCSM4/CCSM4_19930701_19940630_1a1/work/ww3.July.nc\nFri Nov 4 13:50:57 2016: ncrcat -O -o tmp.nc ww3.199307.nc ww3.199308.nc ww3.199309.nc ww3.199310.nc ww3.199311.nc ww3.199312.nc ww3.199401.nc ww3.199402.nc ww3.199403.nc ww3.199404.nc ww3.199405.nc ww3.199406.nc" ;
:nco_openmp_thread_number = 1 ;
:NCO = "4.3.7" ;
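Note that hs is stored as packed 16-bit shorts; with CF decoding enabled, xarray applies scale_factor/add_offset and masks _FillValue, which is why the successful open below reports float64 values. A minimal sketch of the unpacking rule, using the attribute values from the header above:

import numpy as np

# CF unpacking: value = add_offset + packed * scale_factor,
# with packed == _FillValue masked out (shown here as NaN).
packed = np.array([-32767, 0, 1500], dtype=np.int16)
scale_factor, add_offset, fill = 0.002, 0.0, -32767
unpacked = np.where(packed == fill, np.nan, add_offset + packed * scale_factor)
print(unpacked)  # [nan  0.  3.]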
On the HPC:
$ nc-config --version
netCDF 4.2.1.1
>>> xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 2.6.32-431.el6.x86_64
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.0
pandas: 0.20.2
numpy: 1.13.1
scipy: 0.19.1
netCDF4: 1.2.4
h5netcdf: None
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.0
matplotlib: 2.0.2
cartopy: 0.15.1
seaborn: None
setuptools: 27.2.0
pip: 9.0.1
conda: 4.3.30
pytest: None
IPython: None
sphinx: None
>>> xr.open_dataarray('ww3.Hs.July.NPac.nc')
Segmentation fault (core dumped)
On my Mac:
$ nc-config --version
netCDF 4.4.1
>>> xr.show_versions()
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Darwin
OS-release: 17.2.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
xarray: 0.10.0
pandas: 0.20.1
numpy: 1.12.1
scipy: 1.0.0
netCDF4: 1.2.4
h5netcdf: None
Nio: None
bottleneck: 1.2.1
cyordereddict: None
dask: 0.16.0
matplotlib: 2.1.0
cartopy: 0.15.1
seaborn: 0.8.1
setuptools: 36.2.7
pip: 9.0.1
conda: 4.3.29
pytest: 3.2.0
IPython: 6.2.1
sphinx: 1.5.6
In [10]: xr.open_dataarray('ww3.Hs.July.NPac.nc')
Out[10]:
<xarray.DataArray 'hs' (time: 31, latitude: 66, longitude: 191)>
[390786 values with dtype=float64]
Coordinates:
* latitude (latitude) float32 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 ...
* longitude (longitude) float32 100.0 101.0 102.0 103.0 104.0 105.0 106.0 ...
* time (time) datetime64[ns] 2013-07-01 2013-07-02 2013-07-03 ...
Attributes:
long_name: significant height of wind and swell waves
standard_name: sea_surface_wave_significant_height
globwave_name: significant_wave_height
units: m
valid_min: 0
valid_max: 32000

Apologies. It looks like my ncks command wasn't completing properly. Nothing to do with xarray; it's just strange that my Mac wasn't throwing an error. Stranger still, giving out.nc a different file name seems to fix it:
$ ncks -O -d longitude,100.0,290.0 -d latitude,0.0,65.0 ww3.Hs.July.nc ww3.Hs.July.NPac.nc
$ python
>>> import xarray as xr
>>> xr.open_dataarray('ww3.Hs.July.NPac.nc')
Segmentation fault (core dumped)
$ ncks -O -d longitude,100.0,290.0 -d latitude,0.0,65.0 ww3.Hs.July.nc ww3.Hs.July.NPac1.nc
$ python
>>> import xarray as xr
>>> xr.open_dataarray('ww3.Hs.July.NPac1.nc')
<xarray.DataArray 'hs' (time: 31, latitude: 66, longitude: 191)>
[390786 values with dtype=float64]
Coordinates:
* latitude (latitude) float32 0.0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 ...
* longitude (longitude) float32 100.0 101.0 102.0 103.0 104.0 105.0 106.0 ...
* time (time) datetime64[ns] 1993-07-01 1993-07-02 1993-07-03 ...
Attributes:
long_name: significant height of wind and swell waves
standard_name: sea_surface_wave_significant_height
globwave_name: significant_wave_height
units: m
valid_min: 0
valid_max: 32000
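Since the root cause was an incompletely written file, one cheap guard (a sketch, not from the original post) is to force a full read once right after creating a file, so most forms of corruption raise a Python error up front instead of segfaulting mid-analysis (though a badly truncated HDF5 file can still crash the underlying C library):

import xarray as xr

def check_netcdf(path):
    """Open, fully load, and close a NetCDF file; raises on most corruption."""
    with xr.open_dataset(path) as ds:
        ds.load()
    return True

check_netcdf('ww3.Hs.July.NPac1.nc')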

Related

Snakemake: MissingInputException in snakemake pipeline

I'm trying a Snakemake pipeline and I'm stuck on an error I really don't understand.
I've got a directory (raw_data) in which I have the input files:
ll /home/nico/labo/etudes/Optimal/data/raw_data
total 41M
drwxrwxr-x 2 nico nico 4,0K mars 6 16:09 ./
drwxrwxr-x 11 nico nico 4,0K mars 6 16:14 ../
-rw-rw-r-- 1 nico nico 15M févr. 27 12:21 sampleA_R1.fastq.gz
-rw-rw-r-- 1 nico nico 19M févr. 27 12:22 sampleA_R2.fastq.gz
-rw-rw-r-- 1 nico nico 3,4M févr. 27 12:21 sampleB_R1.fastq.gz
-rw-rw-r-- 1 nico nico 4,3M févr. 27 12:22 sampleB_R2.fastq.gz
This directory contains 4 files for 2 samples.
I created a JSON config file for the Snakemake pipeline, named config_snakemake_Optimal_mapping_BaL.json:
{
    "fastqExtension": "fastq.gz",
    "fastqDir": "/home/nico/labo/etudes/Optimal/data/raw_data",
    "outputDir": "/home/nico/labo/etudes/Optimal/data/mapping_BaL",
    "logDir": "logs",
    "reference": {
        "fasta": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta",
        "index": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta.bwt"
    }
}
And finally the Snakemake file snakefile_bwa_samtools.py:
import subprocess
from os.path import join

### Globals ---------------------------------------------------------------------

# A Snakemake regular expression matching fastq files.
SAMPLES, = glob_wildcards(join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]))
print(SAMPLES)

### Rules -----------------------------------------------------------------------

# Pipeline output files
rule all:
    input: expand(join(config["outputDir"], "{sample}.bam.bai"), sample=SAMPLES)

# Reads alignment on reference genome and BAM file creation
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1_ID = "{sample}_R1."+config["fastqExtension"],
        fq2_ID = "{sample}_R2."+config["fastqExtension"],
        fq1 = join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2."+config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1_ID} and {input.fq2_ID} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"

# Sorting the BAM files on genomic positions
rule bam_sort:
    input:
        join(config["outputDir"], "{sample}.bamUnsorted")
    output:
        join(config["outputDir"], "{sample}.bam")
    log:
        join(config["outputDir"], config["logDir"], "{sample}.samtools_sort.log")
    version:
        subprocess.getoutput(
            "samtools --version | "
            "head -1 | "
            "cut -d' ' -f2"
        )
    message:
        "Genomic sorting of {input} with samtools version {version}."
    shell:
        "samtools sort -f {input} {output} 2> {log}"

# Indexing the BAM files
rule bam_index:
    input:
        join(config["outputDir"], "{sample}.bam")
    output:
        join(config["outputDir"], "{sample}.bam.bai")
    message:
        "Indexing {input}."
    shell:
        "samtools index {input}"
I run this pipeline:
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
and I get the following error output:
['sampleB', 'sampleA']
MissingInputException in line 18 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
Missing input files for rule bwa_mem_to_bam:
sampleB_R1.fastq.gz
sampleB_R2.fastq.gz
or, depending on the moment:
['sampleB', 'sampleA']
PeriodicWildcardError in line 40 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
The value _unsorted in wildcard sample is periodically repeated (sampleB_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted). This would lead to an infinite recursion. To avoid this, e.g. restrict the wildcards in this rule to certain values.
The samples are correctly detected, as they appear in the list (the first line of both kinds of output), and I'm surely messing something up with the wildcards in the rule bwa_mem_to_bam, but I really don't get why.
Any clue?
I quickly looked at your code.
Why didn't the first one work out?
Look at where you declare fq1_ID and fq1 (same for the second read file): you didn't assign the same string. For fq1 you prepend a directory to the file name, which is not present for fq1_ID, so Snakemake searches the workdir (the current directory if the -d option is not set) for a file with that name, because these variables are in the input section.
So removing the two fq1/2_ID entries will get rid of all the file-searching problems.
Hugo
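A minimal hypothetical rule (not from the pipeline above) illustrating the distinction Hugo describes: every entry under input: must resolve to a file that exists or can be produced by another rule, while params: entries are plain strings that are only substituted into directives such as message: and shell::

rule illustrate:
    input:
        fq = "data/{sample}_R1.fastq.gz"    # checked on disk / in the DAG
    params:
        fq_ID = "{sample}_R1.fastq.gz"      # never looked up as a file
    output:
        "out/{sample}.txt"
    shell:
        "echo {params.fq_ID} > {output}"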
Finally, I succeeded with the pipeline by removing the fq1_ID and fq2_ID variables in the rule bwa_mem_to_bam and replacing input.fq1_ID and input.fq2_ID with input.fq1 and input.fq2 in the rule's message.
The message is less elegant, but the pipeline runs correctly. I still don't understand exactly where the mistake was; if someone can explain, I'm still listening!
The correct code for rule bwa_mem_to_bam:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2."+config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1} and {input.fq2} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
Thanks Hugo for checking my code and your explanation, it makes sense!
I finally got a flash of inspiration waking up this morning (the best ideas come then) and realized that I had neglected the params section of the rule: fq1_ID and fq2_ID are not inputs but params.
I changed the code to this:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1.fastq.gz"),
        fq2 = join(config["fastqDir"], "{sample}_R2.fastq.gz")
    output:
        temp(join(config["outputDir"], "{sample}_unsorted.bam"))
    params:
        # Plain strings for the message below; params are never checked on disk.
        fq1_ID = "{sample}_R1.fastq.gz",
        fq2_ID = "{sample}_R2.fastq.gz",
        ref_ID = os.path.basename(config["reference"]["fasta"])  # requires `import os` at the top of the snakefile
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {params.fq1_ID} and {params.fq2_ID} on {params.ref_ID} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
And it works just fine!
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
Provided cores: 3
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
2 bam_index
2 bam_sort
2 bwa_mem_to_bam
7
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
1 of 7 steps (14%) done
Genomic sorting of sampleB_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB_unsorted.bam.
2 of 7 steps (29%) done
Indexing sampleB.bam.
3 of 7 steps (43%) done
4 of 7 steps (57%) done
Genomic sorting of sampleA_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA_unsorted.bam.
5 of 7 steps (71%) done
Indexing sampleA.bam.
6 of 7 steps (86%) done
localrule all:
input: /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB.bam.bai, /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA.bam.bai
7 of 7 steps (100%) done
And I finally get my correct messages:
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.

%R magic no longer works in IPython or Jupyter following update to Canopy Ver 1.7.4.3348

Previously %R and %%R magics were working in IPython and Jupyter python notebooks.
The R terminal version is:
R version 3.3.1 (2016-06-21) -- "Bug in Your Hair"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-apple-darwin13.4.0 (64-bit)
After upgrading to Version 1.7.4.3348 of Enthought Canopy, the notebooks and IPython no longer work. I have tried reinstalling, following "Installing RKernel" and http://irkernel.github.io/installation/, which worked before. I run the command to load the R extension as per
%load_ext rpy2.ipython
I get the error message as follows:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-691c6d73b073> in <module>()
----> 1 get_ipython().magic(u'load_ext rpy2.ipython')
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in magic(self, arg_s)
2161 magic_name, _, magic_arg_s = arg_s.partition(' ')
2162 magic_name = magic_name.lstrip(prefilter.ESC_MAGIC)
-> 2163 return self.run_line_magic(magic_name, magic_arg_s)
2164
2165 #-------------------------------------------------------------------------
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/interactiveshell.pyc in run_line_magic(self, magic_name, line)
2082 kwargs['local_ns'] = sys._getframe(stack_depth).f_locals
2083 with self.builtin_trap:
-> 2084 result = fn(*args,**kwargs)
2085 return result
2086
<decorator-gen-64> in load_ext(self, module_str)
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/magic.pyc in <lambda>(f, *a, **k)
191 # but it's overkill for just that one bit of state.
192 def magic_deco(arg):
--> 193 call = lambda f, *a, **k: f(*a, **k)
194
195 if callable(arg):
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/magics/extension.pyc in load_ext(self, module_str)
64 if not module_str:
65 raise UsageError('Missing module name.')
---> 66 res = self.shell.extension_manager.load_extension(module_str)
67
68 if res == 'already loaded':
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/IPython/core/extensions.pyc in load_extension(self, module_str)
82 if module_str not in sys.modules:
83 with prepended_to_syspath(self.ipython_extension_dir):
---> 84 __import__(module_str)
85 mod = sys.modules[module_str]
86 if self._call_load_ipython_extension(mod):
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/rpy2/ipython/__init__.py in <module>()
----> 1 from .rmagic import load_ipython_extension
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/rpy2/ipython/rmagic.py in <module>()
57 template_converter = ro.conversion.converter
58 try:
---> 59 from rpy2.robjects import pandas2ri as baseconversion
60 template_converter = template_converter + baseconversion.converter
61 except ImportError:
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/rpy2/robjects/pandas2ri.py in <module>()
7 INTSXP)
8
----> 9 from pandas.core.frame import DataFrame as PandasDataFrame
10 from pandas.core.series import Series as PandasSeries
11 from pandas.core.index import Index as PandasIndex
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/__init__.py in <module>()
20
21 # numpy compat
---> 22 from pandas.compat.numpy_compat import *
23
24 try:
/Users/Llewelyn_home/Library/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas/compat/numpy_compat.py in <module>()
13
14 # numpy versioning
---> 15 _np_version = np.version.short_version
16 _np_version_under1p8 = LooseVersion(_np_version) < '1.8'
17 _np_version_under1p9 = LooseVersion(_np_version) < '1.9'
AttributeError: 'module' object has no attribute 'version'
Could it be related to the Canopy version of numpy being listed as 1.10.4-1 while the np.version result is 1.11.1 (based on the error message)? Any suggestions gratefully received. PS. R still works in the console, in the terminal, and in Jupyter with an R kernel...
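One quick way to check (a sketch, not from the original thread) whether the notebook is importing a stale or partially overwritten numpy install, which would explain the numpy.version submodule going missing:

import numpy as np

print(np.__version__)    # version reported by the imported package
print(np.__file__)       # which installation is actually on sys.path
import numpy.version     # raises ImportError if the install is broken
print(numpy.version.short_version)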
The support crew at Enthought examined the version of numpy, then pandas. Reinstalling both did not solve the problem. The unexplained resolution came from running pip install theano --upgrade in the Canopy Terminal. Logging this as an unexplained issue with %R, but with a strong indication that it is about dependency versions.

yosys fails at ABC pass (on counter.v demo)

I hope someone can help me with this...
This is my first encounter with yosys. To start, I'm trying to run the very same demo that Clifford explained in his presentation. I downloaded the demo from the following location: https://github.com/cliffordwolf/yosys/tree/master/manual/PRESENTATION_Intro
The yosys run breaks at the ABC pass with the following message:
12. Executing ABC pass (technology mapping using ABC).
12.1. Extracting gate netlist of module `\counter' to `<abc-temp-dir>/input.blif'..
Extracted 6 gates and 12 wires to a netlist network with 4 inputs and 2 outputs.
12.1.1. Executing ABC.
Running ABC command: <yosys-exe-dir>/yosys-abc -s -f <abc-temp-dir>/abc.script 2>&1
ABC: ABC command line: "source <abc-temp-dir>/abc.script".
ABC:
ABC: + read_blif <abc-temp-dir>/input.blif
ABC: + read_lib -w /home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib
ABC: usage: read_lib [-SG float] [-M num] [-dnvwh] <file>
ABC: reads Liberty library from file
ABC: -S float : the slew parameter used to generate the library [default = 0.00]
ABC: -G float : the gain parameter used to generate the library [default = 0.00]
ABC: -M num : skip gate classes whose size is less than this [default = 0]
ABC: -d : toggle dumping the parsed library into file "*_temp.lib" [default = no]
ABC: -n : toggle replacing gate/pin names by short strings [default = no]
ABC: -v : toggle writing verbose information [default = yes]
ABC: -w : toggle writing information about skipped gates [default = yes]
ABC: -h : prints the command summary
ABC: <file> : the name of a file to read
ABC: ** cmd error: aborting 'source <abc-temp-dir>/abc.script'
ERROR: Can't open ABC output file `/tmp/yosys-abc-KDGya6/output.blif'.
[boris#E7440 yosys_synthesys]$
I have had a look at the file location mentioned in the error message above; there is no output.blif in there:
[boris#E7440 yosys_synthesys]$ ll /tmp/yosys-abc-KDGya6/
total 12K
-rw-rw-r--. 1 boris boris 542 Jul 5 11:21 abc.script
-rw-rw-r--. 1 boris boris 526 Jul 5 11:21 input.blif
-rw-rw-r--. 1 boris boris 852 Jul 5 11:21 stdcells.genlib
[boris#E7440 yosys_synthesys]$
By the way, here is some system/tool info that might be relevant for debugging:
Linux E7440.DELL 4.4.13-200.fc22.x86_64 #1 SMP Wed Jun 8 15:59:40 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Yosys 0.6+141 (git sha1 080f95f, gcc 5.3.1 -fPIC -Os)
UC Berkeley, ABC 1.01 (compiled Mar 8 2015 01:00:49)
The issue has been resolved.
Solution: changed the rundir from:
/home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib
to:
/home/boris/Documents/SelfLearning/yosys_synthesys/mycells.lib
Lesson learned: the ABC tool does not accept space characters in path/file names.
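If renaming the directory is not an option, a workaround sketch (my assumption, not from the original thread) is to stage the Liberty file into a space-free temporary path before invoking yosys/ABC:

import os
import shutil
import tempfile

lib = "/home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib"
if " " in lib:
    tmpdir = tempfile.mkdtemp()   # temp dirs contain no spaces on Linux
    staged = os.path.join(tmpdir, os.path.basename(lib))
    shutil.copy(lib, staged)
    lib = staged
print(lib)  # pass this space-free path to read_lib instead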

R character value is null, but fails is.null() test

Very strange: a value that looks null evaluates to FALSE in the is.null() test. See below. In this case, pid seems to be null, but the test says otherwise, causing me all sorts of 'next step' problems in the code.
> pid <- system2('ps', args = "-ef | grep 'ssh -f' | grep -v grep | tr -s ' ' | cut -d ' ' -f 3", stdout = TRUE)
> pid
character(0)
> is.null(pid)
[1] FALSE
> if(!is.null(pid) && nchar(pid)) {cat('got some pid')}
Error in if (!is.null(pid) && nchar(pid)) { :
missing value where TRUE/FALSE needed
> if(!is.null(pid)) {cat('got some pid? Really?')}
got some pid? Really?
What do folks think is happening here? Here is my R version information:
> version
_
platform x86_64-pc-linux-gnu
arch x86_64
os linux-gnu
system x86_64, linux-gnu
status
major 3
minor 2.2
year 2015
month 08
day 14
svn rev 69053
language R
version.string R version 3.2.2 (2015-08-14)
nickname Fire Safety
Full version of the OS:
Linux rserver 3.16.0-44-generic #59~14.04.1-Ubuntu SMP Tue Jul 7 15:07:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
In the end, I simply want this code to run:
> if (nchar(pid) > 0) {
+ cat('do something\n')
+ }
Error in if (nchar(pid) > 0) { : argument is of length zero
The fact that you have an empty character vector doesn't mean that it's NULL. character(0) has length zero but is not NULL, so test length(pid) > 0 (rather than nchar(pid) > 0, which fails on a zero-length vector). Here's an example:
> pid <- character()
> pid
character(0)
> is.null(pid)
[1] FALSE
> pid <- NULL
> pid
NULL
> is.null(pid)
[1] TRUE

How can I find the current date minus seven days in Unix?

I am trying to find the date that was seven days before today.
CURRENT_DT=`date +"%F %T"`
diff=$CURRENT_DT-7
echo $diff
I am trying things like the above to find the date seven days before the current date. Could anyone help me out, please?
GNU date will do the math for you:
date --date "7 days ago"
Other versions will require you to convert the current date into seconds since the UNIX epoch first, manually subtract 7 days' worth of seconds, and convert that back into the desired form. Consult the documentation for your version of date for details on how to convert to and from Unix timestamps. Here's an example using GNU date again:
x=$(date +%s)
x=$((x - 7 * 24 * 60 * 60))
date --date @$x
Here is a simple Perl one-liner which (unlike the GNU-specific examples above) works on a plain Unix system:
perl -e 'use POSIX qw(ctime); printf "%s", ctime(time - (7 * 24 * 60 * 60));'
(Tested with Solaris 10, and a token Linux system, of course - with the caveat that Perl is not necessarily part of one's configuration, merely very likely).
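If Python happens to be available, a portable equivalent (a sketch, not from the original answers) that mirrors date +"%F %T" minus seven days:

from datetime import datetime, timedelta

# Subtract seven days from the current local time and format it
# like `date +"%F %T"` (i.e. YYYY-MM-DD HH:MM:SS).
week_ago = datetime.now() - timedelta(days=7)
print(week_ago.strftime("%Y-%m-%d %H:%M:%S"))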
Adding this one for shells on OSX:
date -v-7d
> Tue Apr 3 15:16:31 EDT 2018
date
> Tue Apr 10 15:16:33 EDT 2018
Need that formatted?
date -v-7d +%Y-%m-%d
> 2018-04-03
Ksh's printf can do time calculations:
$ printf '%(%Y-%m-%d)T\n'
2015-04-07
$ printf '%(%Y-%m-%d)T\n' '7 days ago'
2015-03-31
$
I haven't used Unix in a while, but I found this in one of my scripts (it prints the target time as seconds since the epoch):
echo `date +%s`-604800 | bc
# Keep only the last seven days of test.log: compute the date string for
# "7 days ago", exit if it does not appear in the log, then delete
# everything up to and including the first line that matches it.
DATE=$(date --date "7 days ago" | awk '{print $1,$2,$3}')
echo "$DATE"
if [ -z "$(grep -i "$DATE" test.log)" ]; then
    exit 1
fi
sed -i "1,/$DATE/d" test.log
