vlog-7 error: Failed to open design unit file in read mode - ModelSim

I am trying to run an example design from Intel FPGA, using a Tcl script provided by Intel. It reports: Error (vlog-7) Failed to open design unit file "blabla" in read mode. No such file or directory (errno = ENOENT). I cannot find the cause of the error. I then tried a ModelSim project I had used earlier, which used to work, and I get the same error! What could be wrong?
This is the start of tb_run.tcl (run from the command line: ModelSim> VHDL 2008 do tb_run.tcl); the file is generated by the Quartus example:
global env ;
# set QUARTUS_INSTALL_DIR "$env(QUARTUS_ROOTDIR)"  => initially I thought
# there was something wrong with the root dir, so I changed it to the line
# below:
set QUARTUS_INSTALL_DIR "C:/intelFPGA/18.0/quartus"
set SETUP_SCRIPTS ../setup_scripts
set tb_top_waveform msim_wave.do
set QSYS_SIMDIR "./../setup_scripts"
set TOP_LEVEL_NAME tb_top
source $SETUP_SCRIPTS/mentor/msim_setup.tcl
# Compile device library files
dev_com
================================================
=> dev_com is line 29 of tb_run.tcl; this is where the error occurs, see below
ModelSim reports:
Modelsim> VHDL 2008 do tb_run.tcl
# C:/intelFPGA/18.0/quartus
# ../setup_scripts
# msim_wave.do
# ./../setup_scripts
# tb_top
# [exec] file_copy
# List Of Command Line Aliases
#
# file_copy -- Copy ROM/RAM files to simulation directory
#
# dev_com -- Compile device library files
#
# com -- Compile the design files in correct order
#
# elab -- Elaborate top level design
#
# elab_debug -- Elaborate the top level design with novopt option
#
# ld -- Compile all the design files and elaborate the top level design
#
# ld_debug -- Compile all the design files and elaborate the top level design with -novopt
#
#
#
# List Of Variables
#
# TOP_LEVEL_NAME -- Top level module name.
# For most designs, this should be overridden
# to enable the elab/elab_debug aliases.
#
# SYSTEM_INSTANCE_NAME -- Instantiated system module name inside top level module.
#
# QSYS_SIMDIR -- Qsys base simulation directory.
#
# QUARTUS_INSTALL_DIR -- Quartus installation directory.
#
# USER_DEFINED_COMPILE_OPTIONS -- User-defined compile options, added to com/dev_com aliases.
#
# USER_DEFINED_VHDL_COMPILE_OPTIONS -- User-defined vhdl compile options, added to com/dev_com aliases.
#
# USER_DEFINED_VERILOG_COMPILE_OPTIONS -- User-defined verilog compile options, added to com/dev_com aliases.
#
# USER_DEFINED_ELAB_OPTIONS -- User-defined elaboration options, added to elab/elab_debug aliases.
#
# SILENCE -- Set to true to suppress all informational and/or warning messages in the generated simulation script.
#
# FORCE_MODELSIM_AE_SELECTION -- Set to true to force to select Modelsim AE always.
# [exec] dev_com
# Model Technology ModelSim - Intel FPGA Edition vlog 10.6c Compiler 2017.07 Jul 26 2017
# Start time: 10:43:14 on Jun 25,2019
# vlog -reportprogress 300 C:/intelFPGA/18.0/quartus/eda/sim_lib/altera_primitives.v -work altera_ver
# ** Error: (vlog-7) Failed to open design unit file "C:/intelFPGA/18.0/quartus/eda/sim_lib/altera_primitives.v" in read mode.
# No such file or directory. (errno = ENOENT)
# End time: 10:43:14 on Jun 25,2019, Elapsed time: 0:00:00
# Errors: 1, Warnings: 0
# ** Error: C:/intelFPGA_pro/18.0/modelsim_ase/win32aloem/vlog failed.
# Error in macro ./tb_run.tcl line 29
# C:/intelFPGA_pro/18.0/modelsim_ase/win32aloem/vlog failed.
# while executing
# "vlog C:/intelFPGA/18.0/quartus/eda/sim_lib/altera_primitives.v -work altera_ver"
# ("eval" body line 1)
# invoked from within
# "eval vlog $USER_DEFINED_VERILOG_COMPILE_OPTIONS $USER_DEFINED_COMPILE_OPTIONS "$QUARTUS_INSTALL_DIR/eda/sim_lib/altera_primitives.v" ..."
# invoked from within
# "if [string is false -strict [modelsim_ae_select $FORCE_MODELSIM_AE_SELECTION]] {
# eval vlog $USER_DEFINED_VERILOG_COMPILE_OPTIONS $USER_DEFINED_CO..."
# ("eval" body line 5)
# invoked from within
# "dev_com "

I don't really know how you managed to solve this problem simply by rebooting. Here is what the problem most likely was and how I solved it.
We have a statement of the problem:
vlog-7 error: Failed to open design unit file in read mode
I was too focused on the "read mode" part and tried to figure out why reading the file would be forbidden.
However, the problem most likely is that the file simply does not exist in the directory the error message points to.
In my case I eventually realised that I had renamed a folder, so the file could no longer be read. Check that you haven't changed anything in the paths leading to your Tcl, .do, and project files.
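A quick sanity check (a minimal sketch, reusing the QUARTUS_INSTALL_DIR variable set in tb_run.tcl) is to ask Tcl from the ModelSim prompt whether the file named in the error actually exists:

# Check the first library file that dev_com tries to compile
set f "$QUARTUS_INSTALL_DIR/eda/sim_lib/altera_primitives.v"
if {[file exists $f]} {
    puts "found: $f"
} else {
    puts "MISSING: $f -- check QUARTUS_INSTALL_DIR and any renamed folders"
}

In the transcript above, for instance, vlog runs from C:/intelFPGA_pro/18.0/ while QUARTUS_INSTALL_DIR points at C:/intelFPGA/18.0/quartus, so it is worth double-checking that eda/sim_lib really still exists under that non-Pro installation path.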

Related

Snakemake wildcards: Using wildcarded files from directory output

I'm new to Snakemake and am trying to use specific files in a rule, taken from the directory() output of another rule that clones a git repo.
Currently, this gives me an error Wildcards in input files cannot be determined from output files: 'json_file', and I don't understand why. I have previously worked through the tutorial at https://carpentries-incubator.github.io/workflows-snakemake/index.html.
The difference between my workflow and the tutorial workflow is that I want to create the data I use later in the first step, whereas in the tutorial, the data was already there.
Workflow description in plain text:
Clone a git repository to path {path}
Run a script {script} on every single JSON file in the directory {path}/parsed/ in parallel to produce the aggregate result {result}
GIT_PATH = config['git_local_path'] # git/
PARSED_JSON_PATH = f'{GIT_PATH}parsed/'
GIT_URL = config['git_url']
# A single parsed JSON file
PARSED_JSON_FILE = f'{PARSED_JSON_PATH}{{json_file}}.json'
# Build a list of parsed JSON file names
PARSED_JSON_FILE_NAMES = glob_wildcards(PARSED_JSON_FILE).json_file
# All parsed JSON files
ALL_PARSED_JSONS = expand(PARSED_JSON_FILE, json_file=PARSED_JSON_FILE_NAMES)
rule all:
    input: 'result.json'

rule clone_git:
    output: directory(GIT_PATH)
    threads: 1
    conda: f'{ENVS_DIR}git.yml'
    shell: f'git clone --depth 1 {GIT_URL} {{output}}'

rule extract_json:
    input:
        cmd='scripts/extract_json.py',
        json_file=PARSED_JSON_FILE
    output: 'result.json'
    threads: 50
    shell: 'python {input.cmd} {input.json_file} {output}'
Running only clone_git works fine (if I set GIT_PATH as the input of all).
Why do I get the error message? Is this because the JSON files don't exist when the workflow is started?
Also - I don't know if this matters - this is a subworkflow used with module.
What you need seems to be a checkpoint rule, which is executed first; only then does Snakemake determine which .json files are present and run your extract/aggregate rules. Here's an adapted example:
I'm struggling to fully understand the file and folder structure you get after cloning your git repo, so I have fallen back on the Snakemake best practice of using resources for downloaded files and results for created files.
You'll need to re-adjust those paths to match your case:
GIT_PATH = config["git_local_path"]  # git/
GIT_URL = config["git_url"]

checkpoint clone_git:
    output:
        git=directory(GIT_PATH),
    threads: 1
    conda:
        f"{ENVS_DIR}git.yml"
    shell:
        f"git clone --depth 1 {GIT_URL} {{output.git}}"

rule extract_json:
    input:
        cmd="scripts/extract_json.py",
        json_file="resources/{file_name}.json",
    output:
        "results/parsed_files/{file_name}.json",
    shell:
        "python {input.cmd} {input.json_file} {output}"

def get_all_json_file_names(wildcards):
    git_dir = checkpoints.clone_git.get(**wildcards).output["git"]
    file_names = glob_wildcards(
        "resources/{file_name}.json"
    ).file_name
    return expand(
        "results/parsed_files/{file_name}.json",
        file_name=file_names,
    )

# Rule has a checkpoint dependency: only after the checkpoint has been executed
# is this rule evaluated, which then calls the function to determine all
# JSON files downloaded from the git repo
rule aggregate:
    input:
        get_all_json_file_names
    output:
        "result.json",
    default_target: True
    shell:
        # TODO: Action which combines all JSON files
edit: Moved the expand(...) from rule aggregate into get_all_json_file_names.
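If it helps, here is one possible way to fill in that TODO, using a Python run block instead of the shell directive. This is only a sketch; merging the parsed files into a single JSON list is an assumption about what the aggregation should do for your data:

rule aggregate:
    input:
        get_all_json_file_names
    output:
        "result.json",
    default_target: True
    run:
        import json
        merged = []
        for path in input:  # every file produced by extract_json
            with open(path) as fh:
                merged.append(json.load(fh))
        with open(output[0], "w") as fh:
            json.dump(merged, fh, indent=2)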

Define SAMPLES for different dir names and sample names in Snakemake code

I have written Snakemake code to run bwa_map. The FASTQ files are in different folders and have different sample names (paired end). It shows the error 'SAMPLES' is not defined. Please help.
Error:
$ snakemake --snakefile rnaseq.smk mapped_reads/EZ-123-B_IGO_08138_J_2_S101_R2_001.bam -np
NameError in line 2 of /Users/singhh5/Desktop/tutorial/rnaseq.smk:
name 'SAMPLES' is not defined
File "/Users/singhh5/Desktop/tutorial/rnaseq.smk", line 2, in <module>
# SAMPLE DIRECTORY
fastq/
    Sample_EZ-123-B_IGO_08138_J_2/
        EZ-123-B_IGO_08138_J_2_S101_R1_001.fastq.gz
        EZ-123-B_IGO_08138_J_2_S101_R2_001.fastq.gz
    Sample_EZ-123-B_IGO_08138_J_4/
        EZ-124-B_IGO_08138_J_4_S29_R1_001.fastq.gz
        EZ-124-B_IGO_08138_J_4_S29_R2_001.fastq.gz
# My Code
expand("~/Desktop/{sample}/{rep}.fastq.gz", sample=SAMPLES)

rule bwa_map:
    input:
        "data/genome.fa",
        "fastq/{sample}/{rep}.fastq"
    conda:
        "env.yaml"
    output:
        "mapped_reads/{rep}.bam"
    threads: 8
    shell:
        "bwa mem {input} | samtools view -Sb - > {output}"
The specific error you are seeing is because the variable SAMPLES isn't set to anything before you use it in expand.
Some other issues you may run into:
Output file is missing the {sample} wildcard.
The value of threads isn't passed into bwa or samtools
You should place your expand into the input directive of the first rule in your Snakefile, typically called all, to properly request the files from bwa_map.
You aren't pairing your reads (R1 and R2) in bwa.
You should look around Stack Overflow or some GitHub projects for similar rules to give you inspiration on how to do this mapping; a rough sketch of one possible layout follows below.
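For what it's worth, here is an untested sketch of how those pieces could fit together. The fastq/Sample_{sample}/ pattern, the mapped_reads/{sample}/{unit}.bam output layout, and the read pairing are assumptions based on the listing above:

SAMPLES, UNITS = glob_wildcards("fastq/Sample_{sample}/{unit}_R1_001.fastq.gz")

rule all:
    input:
        expand("mapped_reads/{sample}/{unit}.bam", zip, sample=SAMPLES, unit=UNITS)

rule bwa_map:
    input:
        ref="data/genome.fa",
        r1="fastq/Sample_{sample}/{unit}_R1_001.fastq.gz",
        r2="fastq/Sample_{sample}/{unit}_R2_001.fastq.gz",
    output:
        "mapped_reads/{sample}/{unit}.bam"
    threads: 8
    conda:
        "env.yaml"
    shell:
        "bwa mem -t {threads} {input.ref} {input.r1} {input.r2} "
        "| samtools view -Sb - > {output}"

The zip in expand keeps each sample paired with its own file prefix instead of taking the full cross product, and -t {threads} forwards the rule's thread count to bwa.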

java.io.FileNotFoundException when running datomic ensure-transactor

I'm trying to learn Datomic, and I'm finding that the Datomic setup and provisioning process has a very high learning curve.
One bizarre problem that I'm having -- which I'm hoping is due to some stupid mistake -- is that when I try to run datomic ensure-transactor I get a file not found error for the properties file.
You'll have to take my word for it that the file exists. I've even opened up all the permissions on the file in case this was a permissions problem.
My properties file looks like this (with license redacted etc.) -- I'm attempting to provision a setup for a local instance of DynamoDB. I've also installed dynamodb-local using brew (brew install dynamodb-local):
################################################################
protocol=ddb-local
host=localhost
port=4334
################################################################
# See http://docs.datomic.com/storage.html
license-key=[license here]
################################################################
# See http://docs.datomic.com/storage.html
# DynamoDB storage settings
aws-dynamodb-table=datemo
# See http://docs.amazonwebservices.com/general/latest/gr/rande.html#ddb_region
# aws-dynamodb-region=us-east-1
# To use DynamoDB Local, change the protocol (above) to ddb-local.
# Comment out aws-dynamodb-region, and instead use aws-dynamodb-override-endpoint
aws-dynamodb-override-endpoint=localhost:8080
################################################################
# See http://docs.datomic.com/storage.html
# This role has read and write access to storage and
# is used by the transactor to read and write data. Optionally,
# this role also has write access to an S3 bucket used for log
# storage and to CloudWatch for metrics, if those features are
# enabled.
# (Can be auto-generated by bin/datomic ensure-transactor.)
aws-transactor-role=
################################################################
# See http://docs.datomic.com/storage.html
# This role has read-only access to storage and
# is used by peers to read data.
# (Can be auto-generated by bin/datomic ensure-transactor.)
aws-peer-role=
################################################################
# See http://docs.datomic.com/capacity.html
# Recommended settings for -Xmx4g production usage.
# memory-index-threshold=32m
# memory-index-max=512m
# object-cache-max=1g
# Recommended settings for -Xmx1g usage, e.g. dev laptops.
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
## OPTIONAL ####################################################
# Set to false to disable SSL between the peers and the transactor.
# Default: true
# encrypt-channel=true
# Data directory is used for dev: and free: storage, and
# as a temporary directory for all storages.
# data-dir=data
# Transactor will log here, see bin/logback.xml to configure logging.
# log-dir=log
# Transactor will write process pid here on startup
# pid-file=transactor.pid
## OPTIONAL ####################################################
# See http://docs.datomic.com/storage.html
# Memcached configuration.
# memcached=host:port,host:port,...
# memcached-username=datomic
# memcached-password=datomic
## OPTIONAL ####################################################
# See http://docs.datomic.com/capacity.html
# Soft limit on the number of concurrent writes to storage.
# Default: 4, Minimum: 2
# write-concurrency=4
# Soft limit on the number of concurrent reads to storage.
# Default: 2 times write-concurrency, Minimum: 2
# read-concurrency=8
## OPTIONAL ####################################################
# See http://docs.datomic.com/aws.html
# Optional settings for rotating logs to S3
# (Can be auto-generated by bin/datomic ensure-transactor.)
# aws-s3-log-bucket-id=
## OPTIONAL ####################################################
# See http://docs.datomic.com/aws.html
# Optional settings for Cloudwatch metrics.
# (Can be auto-generated by bin/datomic ensure-transactor.)
# aws-cloudwatch-region=
# Pick a unique name to distinguish transactor metrics from different systems.
# aws-cloudwatch-dimension-value=your-system-name
## OPTIONAL ####################################################
# See http://docs.datomic.com/ha.html
# The transactor will write a heartbeat into storage on this interval.
# A standby transactor will take over if it sees the heartbeat go
# unwritten for 2x this interval. If your transactor load leads to
# long gc pauses, you can increase this number to prevent the standby
# transactor from unnecessarily taking over during a long gc pause.
# Default: 5000, Minimum: 5000
# heartbeat-interval-msec=5000
## OPTIONAL ####################################################
# The transactor will use this partition for new entities that
# do not explicitly specify a partition.
# Default: :db.part/user
# default-partition=:db.part/user
If there's anyone out there that could set me straight, it'd be much appreciated.
The answer here turns out to be that you must run ensure-transactor from the root directory of the Datomic package. This is apparently true for most (but not all) of the Datomic scripts, and seems to apply in particular to one-off setup scripts like ensure-transactor.
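In other words, something along these lines (the paths are placeholders, and the two-argument form -- input properties in, updated properties out -- is how I recall the CLI working; check the Datomic docs if unsure):

cd /path/to/datomic-pro-0.9.xxxx      # the unpacked Datomic distribution, not your project directory
bin/datomic ensure-transactor config/dev-transactor.properties config/dev-transactor.properties

Presumably the scripts resolve relative paths against the current working directory, which is why running them from anywhere else produces the FileNotFoundException even though the file exists.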

R script or batch file to automatically install latest software build

I'm trying to create a script that will automatically install software updates. There are regularly new software builds created in a known directory, each in its own folder. In either R or a batch file that I can execute from an R script, I'd like to write a script that checks whether any new builds have been created within the past 24 hours. I was wondering if there was a way to use the "Date modified" column in Windows Explorer to make that check. So if today is 06/27/2016, the script would look for any folders that have a "Date modified" after "Current Date" - 24 hours. Then it would navigate to that folder, run the executable file, and auto-install.
Is this even feasible?
EDIT: r2evans provided a very thorough answer to most of my question, but I wanted to make an update to show how far this gets me.
This is the way I've implemented his answer.
path <- "//Path/to/Folders/"
df <- file.info(list.files(path, full.names = TRUE))
df <- rownames(df)[ df$mtime > Sys.time() - 60*60*24 ]
This would work perfectly if I had a single directory of files I was looking for. However, I need this to analyze a series of subdirectories that have the executable (.msi) files in them. So when I print df after it has been modified to look for what has changed in the last 24 hours, it only gives me the name of a folder.
//Path/to/Folders/FolderName
I was thinking about using shell() instead of system2 to execute the file. I execute several batch files in my full script using shell() like this: shell(paste(shQuote("\\\\SERVER\\d$SERVER\\Path\\to\\the\\folder\\file.bat")), "cmd")
I tried implementing that like this: shell(paste(shQuote("\\\\SERVER\\d$SERVER\\Path\\to\\the\\folder\\dffolderName\\*")), "cmd") but this gave the following error:
CMD.EXE was started with the above path as the current directory.
UNC paths are not supported. Defaulting to Windows directory.
The network name cannot be found.
Warning messages:
1: running command 'cmd /c "\\SERVER\d$\SERVER\Path\to\dffolderName\*"' had status 1
2: In shell(paste(shQuote("\\\\SERVER\\d$SERVER\\Path\\to\\the\\folder\\dffolderName\\*")), :
'"\\SERVER\d$\SERVER\Path\to\dffolderName\*"' execution failed with error code 1
Additionally, and importantly, how would I pass the df variable into shell() so that the path becomes a variable?
EDIT 2: Here's a working example of my shell cmd line.
I created a test batch file to execute. This batch file should do all the relevant things that are being done in the R script (i.e. pulling variables in and running from a remote server).
@ECHO off
set var1=%1
set var2=%2
ECHO %var1%
ECHO %var2%
ECHO This batch file is working
pause
Then try this R script:
var1 <- 1
var2 <- 2
shell(paste(shQuote("\\\\SERVER\\d$\\SERVER\\Path\\To\\the\\File\\TestBat.bat"), var1, var2), "cmd")
You can find which files within a directory have been updated recently with something like this:
path <- "path/to/files/"
df <- file.info(list.files(path, full.names = TRUE))
df is just a data.frame with some basic fields on the files. For a particular path on my machine (with docker stuff for confluent-platform):
str(df)
# 'data.frame': 5 obs. of 7 variables:
# $ size : num 0 1434 0 0 0
# $ isdir: logi TRUE FALSE TRUE TRUE TRUE
# $ mode :Class 'octmode' int [1:5] 511 438 511 511 511
# $ mtime: POSIXct, format: "2016-06-15 08:01:41" "2016-06-27 10:07:12" ...
# $ ctime: POSIXct, format: "2016-06-15 08:00:22" "2016-06-15 08:49:39" ...
# $ atime: POSIXct, format: "2016-06-15 08:01:41" "2016-06-27 09:15:55" ...
# $ exe : chr "no" "no" "no" "no" ...
The file names themselves are the row names, so you can either access them directly with:
rownames(df)
# [1] "C:\\Users\\r2/Projects/kafka/confluent-3.0.0"
# [2] "C:\\Users\\r2/Projects/kafka/docker-kafka-notes"
# [3] "C:\\Users\\r2/Projects/kafka/kafka_2.11-0.10.0.0"
# [4] "C:\\Users\\r2/Projects/kafka/tmp"
# [5] "C:\\Users\\r2/Projects/kafka/zookeeper-3.4.8"
From here, it's simple enough to filter on the mtime (modification time):
rownames(df)[ df$mtime > Sys.time() - 60*60*24 ]
# [1] "C:\\Users\\r2/Projects/kafka/docker-kafka-notes"
(I have only modified that one file in the last day.)
If your next step (actually updating the application) is good with a full path, then all is good. If not and you need to remove the leading stuff, there are two easy methods for doing that:
Change into the directory first with setwd("/path/to/dir") and then df <- file.info(list.files()); or
Run basename(...) on the returned (filtered) file names, returning the filename without its leading path.
I prefer the latter (not wanting to rely on or change the current working directory), but that's just personal preference.
Now actually effecting the update is a different issue. If you're simply going to "run the updated application", then I recommend looking into ?system2.
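As a rough illustration of that last step (untested; the msiexec arguments and the one-installer-per-folder layout are my assumptions, not something from your setup), the pieces could be tied together like this:

# Find subfolders modified in the last 24 hours, then run the .msi found in each
path <- "//Path/to/Folders/"
info <- file.info(list.files(path, full.names = TRUE))
recent <- rownames(info)[info$isdir & info$mtime > Sys.time() - 60*60*24]

for (d in recent) {
  msi <- list.files(d, pattern = "\\.msi$", full.names = TRUE)
  if (length(msi) == 1) {
    system2("msiexec", args = c("/i", shQuote(msi), "/quiet", "/norestart"))
  }
}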

Why "Reference to undefined global `Moduletest'" in OCaml?

I wrote
let fact x =
  let result = ref 1 in
  for i = 1 to x do
    result := !result * i;
    Printf.printf "%d %d %d\n" x i !result;
  done;
  !result;;
in a file named "Moduletest.ml", and
val fact : int -> int
in a file named "Moduletest.mli".
But why don't they work?
When I tried to use it in the OCaml toplevel,
Moduletest.fact 3
it told me:
Error: Reference to undefined global `Moduletest'
What's happening?
Thanks.
The OCaml toplevel is linked only with the standard library. There are several options for making other code visible to it:
copy-pasting
evaluating from the editor
loading files with the #use directive
making a custom toplevel
loading with ocamlfind
Copy-pasting
This is self-describing: you just copy code from some source and paste it into the toplevel. Don't forget that the toplevel won't evaluate your code until you add ;;
Evaluating from the editor
Where the editor is of course Emacs... Well, indeed it can be any other capable editor, vim for example. This method is an elaboration of the previous one, where the editor is responsible for copying and pasting the code for you. In Emacs you can evaluate the whole file with the C-c C-b command, or you can narrow it to a selected region with C-c C-r, and the most granular option is C-c C-e, i.e., evaluate an expression, although it is slightly buggy.
Loading with the #use directive
This directive accepts a filename, and it will essentially copy and paste the code from the file. Notice that it won't create a file-module for you. For example, if you have a file test.ml with these contents:
(* file test.ml *)
let sum x y = x + y
then loading it with the #use directive will actually bring the sum value into your scope:
# #use "test.ml";;
# let z = sum 2 2
You must not qualify sum with Test., because no Test module is actually created. The #use directive merely copies the contents of the file into the toplevel, nothing more.
Making custom toplevels
You can create your own toplevel with your code compiled in. It is an advanced topic, so I will skip it.
Loading libraries with ocamlfind
ocamlfind is a tool that allows you to find and load libraries installed on your system into your toplevel. By default, the toplevel is not linked with any code except the standard library. Not even everything that ships with OCaml is linked in; e.g., the Unix module is not available and needs to be loaded explicitly. There are primitive directives that can load any library, like #load and #include, but they are not for the casual user, especially when you have the excellent ocamlfind at your disposal. Before using ocamlfind, you need to load it, since it is also not available by default. The following command will load ocamlfind and add a few new directives:
# #use "topfind";;
While loading, it will show you a little hint on how to use it. The most interesting directive it adds is #require. It accepts a library name and loads (i.e., links) its code into the toplevel:
# #require "unix";;
This will load the unix library. If you're not sure about the name of a library, you can always view all available libraries with the #list command. The #require directive is clever and will automatically load all of the library's dependencies.
If you do not want to type all these directives every time you start the OCaml toplevel, you can create a .ocamlinit file in your home directory and put them there. This file is loaded automatically at toplevel startup.
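For example, a minimal ~/.ocamlinit might look like this (just a sketch; loading the unix library is only an illustration, list whatever you actually need):

(* ~/.ocamlinit -- evaluated automatically when the toplevel starts *)
#use "topfind";;
#require "unix";;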
I have tested your code and it looks fine. You should "load" it from the OCaml toplevel (launched from the same directory as your .ml and .mli files) in the following way:
# #use "Moduletest.ml";;
val fact : int -> int = <fun>
# fact 4;;
4 1 1
4 2 2
4 3 6
4 4 24
- : int = 24
