BLIF output by Yosys contains DFF gates, and the BLIF file cannot be read by ABC

I'm new to Yosys and ABC for logic synthesis. I downloaded the aes_core design from OpenCores and ran the following Yosys script to map the design to BLIF:
read_verilog ./designs/apbtoaes128/trunk/rtl/*.v
hierarchy -check -top aes_core
proc
techmap -map ./oss-cad-suite/share/yosys/adff2dff.v
synth
dfflibmap -prepare ./yosys-yosys-0.23/manual/PRESENTATION_Intro/mycells.lib
abc -liberty ./yosys-yosys-0.23/manual/PRESENTATION_Intro/mycells.lib
dfflibmap -liberty ./yosys-yosys-0.23/manual/PRESENTATION_Intro/mycells.lib
write_blif -gates ./designs/aes_core.blif
After this, the BLIF file contains only five gate types (BUF, NOT, NAND, NOR, DFF); a snippet of the file follows:
...
.gate DFF C=clk D=$auto$rtlil.cc:2560:MuxGate$25762 Q=rd_count[0]
.gate DFF C=clk D=$auto$rtlil.cc:2560:MuxGate$25766 Q=rd_count[1]
.gate DFF C=clk D=$auto$rtlil.cc:2560:MuxGate$25770 Q=rd_count[2]
.gate DFF C=clk D=$auto$rtlil.cc:2560:MuxGate$25774 Q=rd_count[3]
.gate DFF C=clk D=$abc$11428$auto$fsm_map.cc:170:map_fsm$2040[0] Q=state[0]
.gate DFF C=clk D=$abc$11428$auto$fsm_map.cc:170:map_fsm$2040[1] Q=state[1]
.gate DFF C=clk D=$abc$11428$auto$fsm_map.cc:170:map_fsm$2040[2] Q=state[2]
.gate DFF C=clk D=$abc$11428$auto$fsm_map.cc:118:implement_pattern_cache$2077 Q=state[3]
.gate DFF C=clk D=$abc$11428$auto$fsm_map.cc:170:map_fsm$2040[4] Q=state[4]
...
Finally, I want to read the BLIF file with ABC; the ABC script I used is:
read ./yosys-yosys-0.23/manual/PRESENTATION_Intro/mycells.lib
read_blif ./designs/aes_core.blif
And the output is:
Generic file reader requires a known file extension to open "./yosys-yosys-0.23/manual/PRESENTATION_Intro/mycells.h".
Line 393: Cannot find gate "DFF" in the library.
Reading network from file has failed.
It seems that the sequential gate is skipped when ABC reads the cell library. I wonder why this happens and how it can be fixed.
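One hedged observation: ABC's Liberty reader generally imports only combinational gates and skips sequential cells such as DFF, so a mapped `.gate DFF ...` line has no library match on the ABC side. A possible workaround (a sketch, not the only flow) is to leave the flip-flops unmapped in Yosys, dropping the `dfflibmap` passes, so that `write_blif` emits them as generic BLIF `.latch` statements, which ABC's `read_blif` does accept:

```
# What the BLIF then looks like: .latch <input> <output> <type> <clock> <init>
.latch $auto$rtlil.cc:2560:MuxGate$25762 rd_count[0] re clk 2
```

Here `re` marks a rising-edge-triggered latch and `2` a don't-care initial value; ABC then optimizes the combinational logic between the latches.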

Related

CSV file output not well aligned with "read.csv()"

I run the "R code" in "RKWard" to read a CSV file:
# I) Go to the working directory
setwd("/home/***/Desktop/***")
# II) Verify the current working directory
print(getwd())
# III) Load the needed package
require("csv")
# IV) Read the desired file
read.csv(file="CSV_Example.csv", header=TRUE, sep=";")
The data in the CSV file is as follows (an example taken from this website):
id,name,salary,start_date,dept
1,Rick,623.3,2012-01-01,IT
2,Dan,515.2,2013-09-23,Operations
3,Michelle,611,2014-11-15,IT
4,Ryan,729,2014-05-11,HR
5,Gary,843.25,2015-03-27,Finance
6,Nina,578,2013-05-21,IT
7,Simon,632.8,2013-07-30,Operations
8,Guru,722.5,2014-06-17,Finance
But the result is as follows:
id.name.salary.start_date.dept
1 1,Rick,623.3,2012-01-01,IT
2 2,Dan,515.2,2013-09-23,Operations
3 3,Michelle,611,2014-11-15,IT
4 4,Ryan,729,2014-05-11,HR
5 5,Gary,843.25,2015-03-27,Finance
6 6,Nina,578,2013-05-21,IT
7 7,Simon,632.8,2013-07-30,Operations
8 8,Guru,722.5,2014-06-17,Finance
PROBLEM: The data are not split into columns as they are supposed to be.
Can anyone help?
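A likely cause (my reading of the output, not a confirmed diagnosis): the file is comma-separated, but `sep=";"` was passed, so each whole row lands in a single column, which is why the header collapses to `id.name.salary.start_date.dept`. A minimal sketch of the fix, using a temp file in place of `CSV_Example.csv` (note that `read.csv` ships with base R's utils package, so no extra `csv` package is needed):

```r
# Stand-in for CSV_Example.csv -- the real file is comma-separated
tmp <- tempfile(fileext = ".csv")
writeLines(c("id,name,salary", "1,Rick,623.3", "2,Dan,515.2"), tmp)

# read.csv() defaults to sep = ",", so either omit sep or set it explicitly
df <- read.csv(tmp, header = TRUE, sep = ",")
ncol(df)  # 3 -- one column per field, as intended
```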

merge different files into 1 text file in R

I have two files, one a text file and the other a data frame, and I want to merge them into a single text file. On Linux I can use:
cat file1 file2 > outputfile
I wonder if we can do the same thing with R?
file1
##TITLE=POOLED SAMPLES COLLECTED 05-06/03/2018
##JCAMP-DX=4.24
##DATA TYPE=LINK
#ORIGIN Bentley_FTS SomaCount_FCM 82048
##OWNER=Bentley Instruments Inc
##DATE=2018-03-09
##TIME=15:34:48
##BLOCKS=110
##LAB1=Auto Generated
##LAB2=
##CHANNELNAMES=8
file 2:
649.025085449219 0.063037105 0.021338785 -0.00053594 0.008937807 0.03266982
667.675231457819 0.028557044 0.005877694 -0.015043681 0.014945094 0.051547796
686.325377466418 0.021499421 0.017125281 0.043007832 0.04132269 0.027496092
704.975523475018 0.006128653 -0.014599532 -0.000335723 0.020189898 0.024547976
723.625669483618 0.018550962 0.018567896 0.014100821 0.013067127 0.027075281
742.275815492218 0.030145327 0.039745297 0.050556265 0.056621946 0.058416516
760.925961500818 0.040279277 0.01392867 -0.00143011 0.015103153 0.03580305
779.576107509418 0.031955898 0.013671243 0.000861743 0.000641993 0.001747168
Thanks a lot,
Phuong
We can use file.append:
file.append("fileMerged.txt", "file1.txt")
file.append("fileMerged.txt", "file2.txt")
Or if files are already imported into R, then write with append:
#import to R
f1 <- readLines("file1.txt")
f2 <- readLines("file2.txt")
# output with append
write(f1, "fileMerged.txt")
write(f2, "fileMerged.txt", append = TRUE)
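One caveat worth adding (a sketch, assuming plain text files): `file.append()` creates the target if it is missing, but re-running the same lines appends the content a second time, so clear any stale output first. It is also vectorized over source files:

```r
# Build two small inputs (stand-ins for file1.txt / file2.txt)
writeLines("##TITLE=example header", "file1.txt")
writeLines("649.025 0.063", "file2.txt")

# Remove a stale merged file, then append both sources in one call
if (file.exists("fileMerged.txt")) file.remove("fileMerged.txt")
file.append("fileMerged.txt", c("file1.txt", "file2.txt"))
readLines("fileMerged.txt")
```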

Error when using RNetlogo to start Netlogo

I have built a simulation model in NetLogo and hope to optimize the model parameters (around 30). Since NetLogo does not support automating multiple runs with different parameter sets, I was thinking of using another platform (R/Python/Java) to call NetLogo, analyze the simulated results, and find the optimal parameters.
However, none of them has worked so far. In R, I encountered an error when starting NetLogo via RNetLogo. I have tried all the potential solutions I could find online but still haven't figured out the issue. I would appreciate it if someone could help.
Code:
library(RNetLogo)
nl.path = "C:/Program Files/NetLogo 5.3/app"
NLStart(nl.path, gui=FALSE, nl.jarname = 'NetLogo.jar')
Error message:
java.lang.NoClassDefFoundError: org/nlogo/api/Exceptions$Handler
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
Caused by: java.lang.ClassNotFoundException
at RJavaClassLoader.findClass(RJavaClassLoader.java:383)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
... 2 more
Version:
- system: Windows 10
- R: 3.3.3
- Netlogo: 5.3/5.3.1/6.0/6.0.2 (tried all of them, same error message)
- Java: 1.8.0_151-b12 (this is the one called in R, checked by .jcall)
- RNetlogo and rJava are most up-to-date as of 1/9/2018
Does it solve your problem if you use the Java environment included in the NetLogo installation?
Try this one before you install or load RNetLogo inside R:
Sys.setenv(JAVA_HOME="YOUR-INSTALLATION-PATH\\NetLogo 6.0.1\\runtime")
I'm definitely stuck on that same error with 5.3.1. Out of curiosity, does this work for you with version 6.0.2?
library(RNetLogo)
nl.path <- "C:/Program Files/NetLogo 6.0.2/app"
NLStart(nl.path, gui = FALSE, nl.jarname = "netlogo-6.0.2.jar")
There are a few diagnostics to run, but before that, copy all the .jar files from the Java/ folder in your Netlogo/ folder to the same folder where NetLogo 5.3.0.app is located.
Then make sure you have both /app in the dir path and the NL version in the nl.jarname argument.
# get packages from source
p<-c("rJava", "RNetLogo"); remove.packages(p)
install.packages("rJava", repos = "https://cran.r-project.org/", type="source")
install.packages("RNetLogo", repos = "https://cran.r-project.org/", type="source")
library(rJava);library(RNetLogo)
nl_path = "C:/Program Files/NetLogo 5.3.0/app"
ver_nl <- "5.3.0"
NLStart(nl_path,gui=F,nl.jarname = paste0("netlogo-",ver_nl,".jar")) # open netlogo without gui
# NLLoadModel("path-to-nl-model",nl.obj=NULL) # load model with nl.obj=NULL
Failing this, run these diagnostics in order. You should have no rJava errors. If you do, ensure you have the latest version of Java SE installed:
https://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html
# test 1
install.packages("rJava", repos = "https://cran.r-project.org/", type="source"); library(rJava)
.jinit()
.jnew( "java/awt/Point", 10L, 10L )
f <- .jnew("java/awt/Frame","Hello")
.jcall(f,,"setVisible",TRUE)
t1err <- geterrmessage()
# test 2
component <- .jnull()
component <- .jcast(component, new.class = "java/awt/Component")
message <- .jnew("java/lang/String","This is a JOptionPane test from rJava.")
message <- .jcast(message, new.class = "java/lang/Object")
title <- .jnew("java/lang/String","Test")
type <- .jnew("java/lang/Integer", as.integer(2))
f <- .jnew("javax/swing/JOptionPane")
.jcall(f,"showMessageDialog", component, message, title, .jsimplify(type))
t2err <- geterrmessage()
# # test 3
.jcall("java/lang/System", "S", "getProperty", "java.vm.version")
.jcall("java/lang/System", "S", "getProperty", "java.vm.name")
.jcall("java/lang/System", "S", "getProperty", "java.vm.info")
.jcall("java/lang/System", "S", "getProperty", "java.runtime.version")
.jcall("java/lang/System", "S", "getProperty", "sun.arch.data.model")
t3err <- geterrmessage()
# test 4
.jcall("java/lang/System", "S", "getProperty", "java.awt.headless")
Sys.getenv("NOAWT")
t4err <- geterrmessage()
errorlist <- function(){
if(geterrmessage()==t1err){stop("Failed Test 1 — Headless exception \n \n Wrong awt GUI support for Java/rJava",call.=T)}
if(geterrmessage()==t2err){stop("Failed Test 2 — Invalid method name for RcallMethod (unable to open dialog box)",call.=T)}
if(geterrmessage()==t3err){stop("Failed Test 3 — Old version of Java. \n \n Download latest version \n \n > https://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html.",call.=T)}
if(geterrmessage()==t4err){stop("Failed Test 4 — ",call.=T)}
}
A final alternative is to run RNetLogo from JGR, which works in headless mode on macOS.
install.packages('JGR',,'http://www.rforge.net/') # note the extra comma
library(JGR)
JGR::JGR()

How to read unquoted extra \r with data.table::fread

The data I have to process contain unquoted text with some additional \r characters. The files are big (500 MB) and copious (>600), and changing the export is not an option. The data might look like:
A,B,C
blah,a,1
bloo,a\r,b
blee,c,d
How can this be handled with data.table's fread?
Is there a better R read CSV function for this, that's similarly performant?
Repro
library(data.table)
csv <- "A,B,C\r\nblah,a,1\r\nbloo,a\r,b\r\nblee,c,d\r\n"
fread(csv)
Error in fread(csv) :
Expected sep (',') but new line, EOF (or other non printing character) ends field 1 when detecting types from point 0:
bloo,a
Advanced repro
The simple repro might be too trivial to give a sense of scale...
samplerecs<-c("blah,a,1","bloo,a\r,b","blee,c,d")
randomcsv<-paste0(c("A,B,C",rep(samplerecs,2000000)))
write(randomcsv,file = "sample.csv")
# Naive approach
fread("sample.csv")
# Akrun's approach with needing text read first
fread(gsub("\r\n|\r", "", paste0(randomcsv,collapse="\r\n")))
#>Error in file.info(input) : file name conversion problem -- name too long?
# Julia's approach with needing text read first
readr::read_csv(gsub("\r\n|\r", "", paste0(randomcsv,collapse="\r\n")))
#> Error: C stack usage 48029706 is too close to the limit
Further to @dirk-eddelbuettel's and @nrussell's suggestions, one way of solving this is to pre-process the file. The processor could also be called from within fread(), but here it is performed in separate steps:
samplerecs<-c("blah,a,1","bloo,a\r,b","blee,c,d")
randomcsv<-paste0(c("A,B,C",rep(samplerecs,2000000)))
write(randomcsv,file = "sample.csv")
# Remove errant `\r`'s with tr - shown here is the Windows R solution
shell("C:/Rtools/bin/tr.exe -d '\\r' < sample.csv > sampleNEW.csv")
fread("sampleNEW.csv")
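The same pre-processing step works on Linux/macOS with the system `tr` (a sketch using the file names from above):

```shell
# Create a small sample with a stray \r inside a field, then strip every \r
printf 'A,B,C\nblah,a,1\nbloo,a\r,b\nblee,c,d\n' > sample.csv
tr -d '\r' < sample.csv > sampleNEW.csv
cat sampleNEW.csv   # the second field of the bloo row no longer carries the errant \r
```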
We can try with gsub
fread(gsub("\r\n|\r", "", csv))
# A B C
#1: blah a 1
#2: bloo a b
#3: blee c d
You can also do this with tidyverse packages, if you'd like.
> library(readr)
> library(stringr)
> read_csv(str_replace_all(csv, "\r", ""))
# A tibble: 3 × 3
A B C
<chr> <chr> <chr>
1 blah a 1
2 bloo a b
3 blee c d
If you do want to do it purely in R, you could try working with connections. As long as a connection is kept open, it will start reading/writing from its previous position. Of course, this means the burden of opening and closing connections falls on you.
In the following code, the file is processed by chunks:
library(data.table)
input_csv <- "sample.csv"
in_conn <- file(input_csv)
output_csv <- "out.csv"
out_conn <- file(output_csv, "w+")
open(in_conn)
chunk_size <- 1E6
return_pattern <- "(?<=^|,|\n)([^,]*(?<!\n)\r(?!\n)[^,]*)(?=,|\n|$)"
buffer <- ""
repeat {
new_chars <- readChar(in_conn, chunk_size)
buffer <- paste0(buffer, new_chars)
while (grepl("[\r\n]$", buffer, perl = TRUE)) {
next_char <- readChar(in_conn, 1)
buffer <- paste0(buffer, next_char)
if (!length(next_char))
break
}
chunk <- gsub("(.*)[,\n][^,\n]*$", "\\1", buffer, perl = TRUE)
buffer <- substr(buffer, nchar(chunk) + 1, nchar(buffer))
cleaned <- gsub(return_pattern, '"\\1"', chunk, perl = TRUE)
writeChar(cleaned, out_conn, eos = NULL)
if (!length(new_chars))
break
}
writeChar('\n', out_conn, eos = NULL)
close(in_conn)
close(out_conn)
result <- fread(output_csv)
Process:
If a chunk ends with a \r or \n, another character is added until it doesn't.
Quotes are put around values containing a \r which isn't adjacent to a
\n.
The cleaned chunk is added to the end of another file.
Rinse and repeat.
This code simplifies the problem by assuming no quoting is done for any field in sample.csv. It's not especially fast, but not terribly slow. Larger values for chunk_size should reduce the amount of time spent in I/O operations. If used for anything beyond this toy example, I'd strongly suggest wrapping it in a tryCatch(...) call to make sure the files are closed afterwards.

debugging mapreduce() function in R

Today I started working with the rhdfs and rmr2 packages.
The mapreduce() function worked as expected on a 1D vector.
Piece of code for the 1D vector:
a1 <- to.dfs(1:20)
a2 <- mapreduce(input=a1, map=function(k,v) keyval(v, v^2))
a3 <- as.data.frame(from.dfs(a2))
It returns the following data frame:
Key Val
1 1 1
2 10 100
3 11 121
4 12 144
5 13 169
6 14 196
7 15 225
8 16 256
9 17 289
10 18 324
11 19 361
12 2 4
13 20 400
14 3 9
15 4 16
16 5 25
17 6 36
18 7 49
19 8 64
20 9 81
So far, so good.
But while running mapreduce() on the mtcars dataset, I got the following error message and have been unable to debug it further. Kindly give me some clue to move ahead.
My piece of code :
rs1 <- mapreduce(input=mtcars,
map=function(k, v) {
if (mtcars$hp > 150) keyval("Bigger", 1) },
reduce=function(k, v) keyval(k, sum(v))
)
Error Message with the above piece of code.
13/09/21 07:24:49 ERROR streaming.StreamJob: Missing required option: input
Usage: $HADOOP_HOME/bin/hadoop jar \
$HADOOP_HOME/hadoop-streaming.jar [options]
Options:
-input <path> DFS input file(s) for the Map step
-output <path> DFS output directory for the Reduce step
-mapper <cmd|JavaClassName> The streaming command to run
-combiner <cmd|JavaClassName> The streaming command to run
-reducer <cmd|JavaClassName> The streaming command to run
-file <file> File/dir to be shipped in the Job jar file
-inputformat TextInputFormat(default)|SequenceFileAsTextInputFormat|JavaClassName Optional.
-outputformat TextOutputFormat(default)|JavaClassName Optional.
-partitioner JavaClassName Optional.
-numReduceTasks <num> Optional.
-inputreader <spec> Optional.
-cmdenv <n>=<v> Optional. Pass env.var to streaming commands
-mapdebug <path> Optional. To run this script when a map task fails
-reducedebug <path> Optional. To run this script when a reduce task fails
-io <identifier> Optional.
-verbose
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|jobtracker:port> specify a job tracker
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]
For more details about these options:
Use $HADOOP_HOME/bin/hadoop jar build/hadoop-streaming.jar -info
Streaming Command Failed!
Error in mr(map = map, reduce = reduce, combine = combine, vectorized.reduce, :
hadoop streaming failed with error code 1
Quick and detailed responses are highly appreciated...
The data you pass to keyval() is treated as a vector, not as a single entity. Try to work through the code below.
Trying locally
# load the data
data(mtcars)
# view a few data lines
head(mtcars)
hpTest <- mtcars$hp  # take the required column
print(hpTest)
# final sum
sum(hpTest[which(hpTest > 150)])  # 2804
Running on Hadoop-MapReduce
exporting env variables
# required
Sys.setenv(HADOOP_HOME="/home/trendwise/apache/hadoop-1.0.4");
Sys.setenv(HADOOP_CMD="/home/trendwise/apache/hadoop-1.0.4/bin/hadoop");
#optional
Sys.setenv(HADOOP_BIN="/home/trendwise/apache/hadoop-1.0.4/bin");
Sys.setenv(HADOOP_CONF_DIR="/home/trendwise/apache/hadoop-1.0.4/conf");
Sys.setenv(HADOOP_STREAMING='/home/trendwise/apache/hadoop-1.0.4/contrib/streaming/hadoop-streaming-1.0.4.jar')
Sys.setenv(LD_LIBRARY_PATH="/lib:/lib/x86_64-linux-gnu:/lib64:/usr/lib:/usr/lib64:/usr/local/lib:/usr/local/lib64:/usr/lib/jvm/jdk1.7.0_10/lib:/usr/lib/jvm/jdk1.7.0_10/jre/lib:/usr/lib/jvm/jdk1.7.0_10/jre/lib/amd64:/usr/lib/jvm/jdk1.7.0_10/jre/lib/amd64/server");
Loading library
library(rmr2)
library(rhdfs)
initializing
hdfs.init()
putting input into HDFS
hpInput = to.dfs(mtcars$hp)
running MapReduce
mapReduceResult <- mapreduce(input = hpInput,
  map = function(k, v) { keyval(rep(1, length(which(v > 150))), v[which(v > 150)]) },
  reduce = function(k2, v2) { keyval(k2, sum(v2)) }
)
viewing MR output
from.dfs(mapReduceResult)
output
$key
[1] 1
$val
[1] 2804
You can use the built-in debugging functionality in the newest RStudio. Just rewrite your code to run MapReduce locally.
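Concretely, a sketch of a local run (assuming rmr2 is installed; `rmr.options(backend = "local")` is the package's switch for running without Hadoop, so ordinary print()/browser() debugging works inside the map and reduce functions):

```r
library(rmr2)
# Run the whole pipeline in plain R -- no Hadoop daemons needed
rmr.options(backend = "local")

hp <- to.dfs(mtcars$hp)
res <- from.dfs(mapreduce(
  input  = hp,
  map    = function(k, v) keyval("Bigger", v[v > 150]),
  reduce = function(k, v) keyval(k, sum(v))
))
res$val  # should be 2804, matching the local sum above
```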
