How to inspect the explicit dependencies of a patch in darcs?

As we know, explicit patch dependencies can be recorded with darcs record --ask-deps. (One use I see for this is preventing situations where "it's easy to move a patch that uses a feature to a point before the feature is introduced".)
So, having a repo where I should have recorded such dependencies, I want to check whether I actually did. How do I inspect the recorded explicit dependencies of a selected patch?
Google turned up some code in Darcs/UI/Commands/Rebase.hs which prints a warning if a patch had such dependencies, but I don't know yet whether there is a stand-alone command that would just report this information (not coupled to another action):
    where doAdd :: (RepoPatch p, ApplyState p ~ Tree)
                => Repository (Rebasing p) wR wU wT
                -> FL (WDDNamed p) wT wT2
                -> HijackT IO (Repository (Rebasing p) wR wU wT2, FL (RebaseName p) wT2 wT2)
          doAdd repo NilFL = return (repo, NilFL)
          doAdd repo ((p :: WDDNamed p wT wU) :>: ps) = do
              case wddDependedOn p of
                  [] -> return ()
                  deps -> liftIO $ do
                      -- It might make sense to only print out this message once, but we might find
                      -- that the dropped dependencies are interspersed with other output,
                      -- e.g. if running with --ask-deps
                      putStr $ "Warning: dropping the following explicit "
                               ++ englishNum (length deps) (Noun "dependency") ":\n\n"
                      let printIndented n =
                              mapM_ (putStrLn . (replicate n ' ' ++)) . lines .
                              renderString Encode . showPatchInfo
                      putStrLn . renderString Encode . showPatchInfo .
                          patch2patchinfo $ wddPatch p
                      putStr " depended on:\n"
                      mapM_ (printIndented 2) deps
                      putStr "\n"
              ...
Perhaps a command that outputs a .dpatch would include this information in the bundle; I should check that now.
In my experiments, neither darcs log -v (http://bugs.darcs.net/issue959) nor darcs diff outputs this information.

One way is to output a .dpatch with darcs send, and look into it.
It's not a very convenient way, because
darcs send needs a target repo (even with -o FILE.dpatch);
several patches get into the bundle, instead of the single one we want to inspect...
Here is an example (I've also checked that darcs log -v doesn't give the information about the explicit dependencies):
Preparation:
$ mkdir test-darcs-deps
$ cd test-darcs-deps/
$ darcs init
$ echo a > a
$ darcs add a
$ darcs rec -m A
$ echo b > b
$ darcs add b
$ darcs rec -m B
Recording the explicit dependency:
$ echo b2 > b
$ darcs rec --ask-deps
hunk ./b 1
-b
+b2
Shall I record this change? (1/1) [ynW...], or ? for more options: y
Do you want to record these changes? [Yglqk...], or ? for more options: y
patch 1f59d082f61f1fb8d57f5f5199869d1fc21b2435
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:07 MSK 2016
* A
Shall I depend on this patch? (1/1) [ynW...], or ? for more options: y
Do you want to depend on these patches? [Yglqk...], or ? for more options: y
Finished recording patch 'B2'
Inspecting the deps (I had to refer to a non-related darcs repo in
order for darcs send to work!):
$ darcs send -o ../test-darcs-deps.dpatch
Missing argument: [REPOSITORY]
Usage: darcs send [OPTION]... [REPOSITORY]
Prepare a bundle of patches to be applied to some target repository.
See darcs help send for details.
$ darcs send -o ../test-darcs-deps.dpatch ../test-darcs
HINT: if you want to change the default remote repository to
/home/imz/tests/test-darcs,
quit now and issue the same command with the --set-default flag.
patch 1f59d082f61f1fb8d57f5f5199869d1fc21b2435
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:07 MSK 2016
* A
A ./a
Shall I send this patch? (1/3) [ynW...], or ? for more options: w
patch 151c8321b2bc36df7ba09dcab0d17c853ed31577
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:32 MSK 2016
* B
A ./b
Shall I send this patch? (2/3) [ynW...], or ? for more options: w
patch f697ac56e8241f5a906c010650b683638944ebf2
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:50 MSK 2016
* B2
M ./b -1 +1
Shall I send this patch? (3/3) [ynW...], or ? for more options: y
Do you want to send these patches? [Yglqk...], or ? for more options: l
---- Already selected patches ----
patch 151c8321b2bc36df7ba09dcab0d17c853ed31577
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:32 MSK 2016
* B
A ./b
patch 1f59d082f61f1fb8d57f5f5199869d1fc21b2435
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:07 MSK 2016
* A
A ./a
patch f697ac56e8241f5a906c010650b683638944ebf2
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:50 MSK 2016
* B2
M ./b -1 +1
---- end of already selected patches ----
Do you want to send these patches? [Yglqk...], or ? for more options: y
Minimizing context, to send with full context hit ctrl-C...
File content did not change. Continue anyway? [yn]y
The dependency can be seen in angle brackets in the output:
[B2
Ivan Zakharyaschev <imz#altlinux.org>**20160108222850
Ignore-this: e8693b796cd3cac50bb19f4458ddb323
]
<
[A
Ivan Zakharyaschev <imz#altlinux.org>**20160108222807
Ignore-this: 613cadd9266dac24e2bcd2dde97d969a
]
> hunk ./b 1
-b
+b2
Here is the full output:
3 patches for repository /home/imz/tests/test-darcs:
patch 151c8321b2bc36df7ba09dcab0d17c853ed31577
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:32 MSK 2016
* B
patch 1f59d082f61f1fb8d57f5f5199869d1fc21b2435
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:07 MSK 2016
* A
patch f697ac56e8241f5a906c010650b683638944ebf2
Author: Ivan Zakharyaschev <imz#altlinux.org>
Date: Sat Jan 9 01:28:50 MSK 2016
* B2
New patches:
[B
Ivan Zakharyaschev <imz#altlinux.org>**20160108222832
Ignore-this: 9558077a30e30ba3c99003a4418991d
] addfile ./b
hunk ./b 1
+b
[A
Ivan Zakharyaschev <imz#altlinux.org>**20160108222807
Ignore-this: 613cadd9266dac24e2bcd2dde97d969a
] addfile ./a
hunk ./a 1
+a
[B2
Ivan Zakharyaschev <imz#altlinux.org>**20160108222850
Ignore-this: e8693b796cd3cac50bb19f4458ddb323
]
<
[A
Ivan Zakharyaschev <imz#altlinux.org>**20160108222807
Ignore-this: 613cadd9266dac24e2bcd2dde97d969a
]
> hunk ./b 1
-b
+b2
Context:
Patch bundle hash:
f7ff61d69da702263b2ac613d3ca979ecab5b07b
Not quite convenient.
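To make the round trip a bit less manual, here is a minimal sketch (assuming an unrelated sibling repo ../other-repo exists just so darcs send has a target, and that your darcs supports the --last matching option with send) that bundles only the most recent patch and prints its explicit-dependency block:
$ darcs send --last 1 -a -o /tmp/last.dpatch ../other-repo
$ sed -n '/^</,/^>/p' /tmp/last.dpatch   # the block between "<" and ">" lists the explicit dependencies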

Related

Weird mercurial Graph

I'm trying to interpret a Mercurial repository, and more specifically its graph. Most of what happened is clear to me, but there are two places where I need some help understanding what happened.
Problem A: "mq"
o changeset: 93:b0aa6f2898b6
| parent: 91:88cca7c8f32e
| user: test <test#gmail.com>
| date: Tue Feb 19 13:36:00 2019 +0100
| summary: some message
|
| o changeset: 92:0aff2e92ec57
|/ user: test <test#gmail.com>
| date: Tue Feb 12 18:10:10 2019 +0100
| summary: [mq]: 2019-02-12_18-10-04_r91+.diff
|
o changeset: 91:88cca7c8f32e
| user: test <test#gmail.com>
| date: Tue Feb 12 18:09:49 2019 +0100
| summary: some message
|
Question 1: What is the meaning of changeset 92? Is this comparable to a rewound commit in git?
Question 2: What does mq mean in this context?
Problem B: Timestamp that does not match with parent!
o changeset: 62:143401518e68
| parent: 60:327ffdb4b8c3
| user: test <test#gmail.com>
| date: Fri Nov 16 21:19:00 2018 +0100
| summary: some message
|
| o changeset: 61:b4a37ff37688
|/ user: test <test#gmail.com>
| date: Fri Nov 16 16:00:00 2018 +0100
| summary: some message
|
o changeset: 60:327ffdb4b8c3
| user: test <test#gmail.com>
| date: Fri Nov 16 18:10:00 2018 +0100
| summary: some message
|
Question 3: How should I interpret the timestamp of changeset 61? Shouldn't the timestamp of 61 lie between the timestamps of changesets 60 and 62?
Thanks for all suggestions!
Problem A: "mq"
A1+A2: I suppose it is a commit (hg commit --mq) related to the Mercurial Queues (MQ) extension, which is a separate, long topic of queues, patches, etc. You are now trying to understand a repo that used MQ with a Mercurial that does not have the MQ extension enabled, which is a tricky task.
No, mq and 0aff2e92ec57 have no relation to git's rewound commits; they serve another role (I'm too lazy to repeat the tutorial here).
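If you just want the graph to render sensibly on your side, a minimal sketch (assuming the repository is local and you are fine with enabling the bundled MQ extension in ~/.hgrc):
$ printf '[extensions]\nmq =\n' >> ~/.hgrc   # enable the bundled MQ extension
$ hg log -G -r 91:93                         # redraw the graph around the [mq] changeset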
Problem B: Timestamp that does not match with parent!
A3: As @ecm already noted, changeset timestamps have near-zero value, because they can be changed or redefined.
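For example (a sketch, not taken from the repository above), the committer can set the date to anything at commit time, which is why ordering by date proves little:
$ hg commit -m "some message" --date "2018-11-16 16:00:00"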

How to resolve error "Hunk #2 FAILED at 456. 1 out of 2 hunks FAILED"

I am trying to run the following command in an Ubuntu terminal:
patch -p0 -i adjustmentFile.patch
It gives the following error:
patching file ./src/helpStructures/CastaliaModule.cc
patching file ./src/node/communication/mac/tunableMac/TunableMAC.cc
Hunk #2 FAILED at 456.
1 out of 2 hunks FAILED -- saving rejects to file ./src/node/communication/mac/tunableMac/TunableMAC.cc.rej
I tried almost all the ways suggested in the linked question "Hunk #1 FAILED at 1. What's that mean?", but nothing worked.
Here are my version details:
VIM - Vi IMproved 8.0 (2016 Sep 12, compiled Jun 06 2019 17:31:41)
Included patches: 1-1453
The patch file:
diff -r -u ./src/helpStructures/CastaliaModule.cc ./src/helpStructures/CastaliaModule.cc
--- ./src/helpStructures/CastaliaModule.cc 2010-12-09 09:56:47.000000000 -0300
+++ ./src/helpStructures/CastaliaModule.cc 2011-12-20 00:16:39.944320051 -0300
@@ -180,6 +180,8 @@
classPointers.resourceManager = getParentModule()->getParentModule()->getSubmodule("ResourceManager");
else if (name.compare("SensorManager") == 0)
classPointers.resourceManager = getParentModule()->getSubmodule("ResourceManager");
+ else if (name.compare("Routing") == 0)
+ classPointers.resourceManager = getParentModule()->getParentModule()->getSubmodule("ResourceManager");
else
opp_error("%s module has no rights to call drawPower() function", getFullPath().c_str());
if (!classPointers.resourceManager)
Only in ./src/helpStructures: CastaliaModule.cc~
diff -r -u ./src/node/communication/mac/tunableMac/TunableMAC.cc ./src/node/communication/mac/tunableMac/TunableMAC.cc
--- ./src/node/communication/mac/tunableMac/TunableMAC.cc 2011-03-30 02:14:34.000000000 -0300
+++ ./src/node/communication/mac/tunableMac/TunableMAC.cc 2011-12-19 23:57:43.894686687 -0300
@@ -405,6 +405,8 @@
void TunableMAC::fromRadioLayer(cPacket * pkt, double rssi, double lqi)
{
TunableMacPacket *macFrame = dynamic_cast <TunableMacPacket*>(pkt);
+ macFrame->getMacRadioInfoExchange().RSSI = rssi;
+ macFrame->getMacRadioInfoExchange().LQI = lqi;
if (macFrame == NULL){
collectOutput("TunableMAC packet breakdown", "filtered, other MAC");
return;
@@ -454,7 +456,8 @@
}
case DATA_FRAME:{
- toNetworkLayer(macFrame->decapsulate());
+ cPacket *netPkt = decapsulatePacket(macFrame);
+ toNetworkLayer(netPkt);
collectOutput("TunableMAC packet breakdown", "received data pkts");
if (macState == MAC_STATE_RX) {
cancelTimer(ATTEMPT_TX);
Only in ./src/node/communication/mac/tunableMac: TunableMAC.cc~
Patching takes some changes made to a file X, and applies them to a different instance of file X. That is, suppose you start with generation 1 of file X; you make changes to get generation 2-a, and someone else starts with generation 1 to make generation 2-b. Now you want to take his edits that created his generation 2-b, and apply them to your generation 2-a.
If 'his' changes clash with 'your' changes, they cannot be automatically patched.
You'll need to look at the changes being made in hunk 2.
- toNetworkLayer(macFrame->decapsulate());
+ cPacket *netPkt = decapsulatePacket(macFrame);
+ toNetworkLayer(netPkt);
and figure out what you want the result to look like. Someone needs to know what the result is supposed to be. You can't resolve conflicts without knowledge of intent.
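As a concrete starting point, a sketch of the usual manual resolution (the .rej path is the one reported above; the editor is whatever you prefer):
$ cat ./src/node/communication/mac/tunableMac/TunableMAC.cc.rej   # see the hunk that could not be applied
$ vim ./src/node/communication/mac/tunableMac/TunableMAC.cc       # merge the rejected change by hand around line 456
$ rm ./src/node/communication/mac/tunableMac/TunableMAC.cc.rej    # remove the reject file once the change is merged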

Snakemake: MissingInputException in snakemake pipeline

I'm trying a Snakemake pipeline and I'm stuck on an error I really don't understand.
I've got a directory (raw_data) in which I have the input files:
ll /home/nico/labo/etudes/Optimal/data/raw_data
total 41M
drwxrwxr-x 2 nico nico 4,0K mars 6 16:09 ./
drwxrwxr-x 11 nico nico 4,0K mars 6 16:14 ../
-rw-rw-r-- 1 nico nico 15M févr. 27 12:21 sampleA_R1.fastq.gz
-rw-rw-r-- 1 nico nico 19M févr. 27 12:22 sampleA_R2.fastq.gz
-rw-rw-r-- 1 nico nico 3,4M févr. 27 12:21 sampleB_R1.fastq.gz
-rw-rw-r-- 1 nico nico 4,3M févr. 27 12:22 sampleB_R2.fastq.gz
This directory contains 4 files for 2 samples.
I created a JSON config file for the Snakemake pipeline named config_snakemake_Optimal_mapping_BaL.json:
{
    "fastqExtension": "fastq.gz",
    "fastqDir": "/home/nico/labo/etudes/Optimal/data/raw_data",
    "outputDir": "/home/nico/labo/etudes/Optimal/data/mapping_BaL",
    "logDir": "logs",
    "reference": {
        "fasta": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta",
        "index": "/home/nico/labo/references/genomes/HIV1/BaL_AY713409/BaL_AY713409.fasta.bwt"
    }
}
And finally the Snakemake file snakefile_bwa_samtools.py:
import subprocess
from os.path import join

### Globals ---------------------------------------------------------------------

# A Snakemake regular expression matching fastq files.
SAMPLES, = glob_wildcards(join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]))
print(SAMPLES)

### Rules -----------------------------------------------------------------------

# Pipeline output files
rule all:
    input: expand(join(config["outputDir"], "{sample}.bam.bai"), sample=SAMPLES)

# Reads alignment on reference genome and BAM file creation
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1_ID = "{sample}_R1."+config["fastqExtension"],
        fq2_ID = "{sample}_R2."+config["fastqExtension"],
        fq1 = join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2."+config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1_ID} and {input.fq2_ID} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"

# Sorting the BAM files on genomic positions
rule bam_sort:
    input:
        join(config["outputDir"], "{sample}.bamUnsorted")
    output:
        join(config["outputDir"], "{sample}.bam")
    log:
        join(config["outputDir"], config["logDir"], "{sample}.samtools_sort.log")
    version:
        subprocess.getoutput(
            "samtools --version | "
            "head -1 | "
            "cut -d' ' -f2"
        )
    message:
        "Genomic sorting of {input} with samtools version {version}."
    shell:
        "samtools sort -f {input} {output} 2> {log}"

# Indexing the BAM files
rule bam_index:
    input:
        join(config["outputDir"], "{sample}.bam")
    output:
        join(config["outputDir"], "{sample}.bam.bai")
    message:
        "Indexing {input}."
    shell:
        "samtools index {input}"
I run this pipeline:
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
and I get the following error output:
['sampleB', 'sampleA']
MissingInputException in line 18 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
Missing input files for rule bwa_mem_to_bam:
sampleB_R1.fastq.gz
sampleB_R2.fastq.gz
or, depending on the moment:
['sampleB', 'sampleA']
PeriodicWildcardError in line 40 of /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py:
The value _unsorted in wildcard sample is periodically repeated (sampleB_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted_unsorted). This would lead to an infinite recursion. To avoid this, e.g. restrict the wildcards in this rule to certain values.
The samples are correctly detected, as they appear in the list (the first line of both kinds of output), and I'm surely messing something up with the wildcards in the rule bwa_mem_to_bam, but I really don't get why.
Any clue?
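A dry run that also prints the resolved shell commands can help show which paths Snakemake is actually looking for (a sketch reusing the same snakefile and config; -n is --dry-run and -p is --printshellcmds):
snakemake -n -p --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json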
I quickly looked at your code.
Why didn't the first version work?
Look at where you declare fq1_ID and fq1 (and the same for the second read): you didn't assign the same string. For fq1 you add a directory to the file name, which is not present for fq1_ID, so Snakemake searches the workdir (the current directory if the -d option is not set) for a file with that name, because these variables are in the input section.
So removing the two fq1/2_ID entries gets rid of all the file-searching problems.
Hugo
Finally, I got the pipeline to succeed by removing the fq1_ID and fq2_ID variables in the rule bwa_mem_to_bam and replacing input.fq1_ID and input.fq2_ID with input.fq1 and input.fq2 in the rule's message.
The message is less elegant, but the pipeline runs correctly. I still don't understand exactly where the mistake was; if someone can explain, I'm still listening!
The correct code for rule bwa_mem_to_bam:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1."+config["fastqExtension"]),
        fq2 = join(config["fastqDir"], "{sample}_R2."+config["fastqExtension"])
    output:
        temp(join(config["outputDir"], "{sample}.bamUnsorted"))
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {input.fq1} and {input.fq2} on {input.fasta} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
Thanks Hugo for checking my code and for your explanation, it makes sense!
I finally had a flash of insight waking up this morning (the best ones) and realized that I had neglected the params part of the rule: fq1_ID and fq2_ID are not inputs but params.
I changed the code to this:
rule bwa_mem_to_bam:
    input:
        index = config["reference"]["index"],
        fasta = config["reference"]["fasta"],
        fq1 = join(config["fastqDir"], "{sample}_R1.fastq.gz"),
        fq2 = join(config["fastqDir"], "{sample}_R2.fastq.gz")
    output:
        temp(join(config["outputDir"], "{sample}_unsorted.bam"))
    params:
        fq1_ID = "{sample}_R1.fastq.gz",
        fq2_ID = "{sample}_R2.fastq.gz",
        ref_ID = os.path.basename(config["reference"]["fasta"])
    version:
        subprocess.getoutput(
            "man bwa | tail -n 1 | cut -d ' ' -f 1 | cut -d '-' -f 2"
        )
    log:
        join(config["outputDir"], config["logDir"], "{sample}.bwa_mem.log")
    message:
        "Alignment of {params.fq1_ID} and {params.fq2_ID} on {params.ref_ID} with BWA version {version}."
    shell:
        "bwa mem {input.fasta} {input.fq1} {input.fq2} 2> {log} | samtools view -Sbh - > {output}"
And it works just fine!
snakemake --cores 3 --snakefile /home/nico/labo/scripts/pipeline_illumina/snakefile_bwa_samtools.py --configfile /home/nico/labo/etudes/Optimal/data/snakemake_config_files/config_snakemake_Optimal_mapping_BaL.json
Provided cores: 3
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 all
2 bam_index
2 bam_sort
2 bwa_mem_to_bam
7
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta with BWA version 0.7.12.
1 of 7 steps (14%) done
Genomic sorting of sampleB_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB_unsorted.bam.
2 of 7 steps (29%) done
Indexing sampleB.bam.
3 of 7 steps (43%) done
4 of 7 steps (57%) done
Genomic sorting of sampleA_unsorted.bam with samtools version 1.2.
Removing temporary output file /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA_unsorted.bam.
5 of 7 steps (71%) done
Indexing sampleA.bam.
6 of 7 steps (86%) done
localrule all:
input: /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleB.bam.bai, /home/nico/labo/etudes/Optimal/data/mapping_BaL/sampleA.bam.bai
7 of 7 steps (100%) done
And I finally get my correct messages:
Alignment of sampleB_R1.fastq.gz and sampleB_R2.fastq.gz on
BaL_AY713409.fasta with BWA version 0.7.12.
Alignment of sampleA_R1.fastq.gz and sampleA_R2.fastq.gz on BaL_AY713409.fasta
with BWA version 0.7.12.

yosys fails at ABC pass (on counter.v demo)

I hope someone can help me with this...
This is my first encounter with yosys. To start, I'm trying to run the very same demo that Clifford explained in his presentation. I downloaded the demo from the following location: https://github.com/cliffordwolf/yosys/tree/master/manual/PRESENTATION_Intro
The yosys run breaks at the ABC pass with the following message:
12. Executing ABC pass (technology mapping using ABC).
12.1. Extracting gate netlist of module `\counter' to `<abc-temp-dir>/input.blif'..
Extracted 6 gates and 12 wires to a netlist network with 4 inputs and 2 outputs.
12.1.1. Executing ABC.
Running ABC command: <yosys-exe-dir>/yosys-abc -s -f <abc-temp-dir>/abc.script 2>&1
ABC: ABC command line: "source <abc-temp-dir>/abc.script".
ABC:
ABC: + read_blif <abc-temp-dir>/input.blif
ABC: + read_lib -w /home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib
ABC: usage: read_lib [-SG float] [-M num] [-dnvwh] <file>
ABC: reads Liberty library from file
ABC: -S float : the slew parameter used to generate the library [default = 0.00]
ABC: -G float : the gain parameter used to generate the library [default = 0.00]
ABC: -M num : skip gate classes whose size is less than this [default = 0]
ABC: -d : toggle dumping the parsed library into file "*_temp.lib" [default = no]
ABC: -n : toggle replacing gate/pin names by short strings [default = no]
ABC: -v : toggle writing verbose information [default = yes]
ABC: -w : toggle writing information about skipped gates [default = yes]
ABC: -h : prints the command summary
ABC: <file> : the name of a file to read
ABC: ** cmd error: aborting 'source <abc-temp-dir>/abc.script'
ERROR: Can't open ABC output file `/tmp/yosys-abc-KDGya6/output.blif'.
[boris#E7440 yosys_synthesys]$
I have had a look at the file location mentioned in the error statement above; there is no output.blif in there:
[boris#E7440 yosys_synthesys]$ ll /tmp/yosys-abc-KDGya6/
total 12K
-rw-rw-r--. 1 boris boris 542 Jul 5 11:21 abc.script
-rw-rw-r--. 1 boris boris 526 Jul 5 11:21 input.blif
-rw-rw-r--. 1 boris boris 852 Jul 5 11:21 stdcells.genlib
[boris#E7440 yosys_synthesys]$
By the way, here is some system/tool info that might be relevant for debugging:
Linux E7440.DELL 4.4.13-200.fc22.x86_64 #1 SMP Wed Jun 8 15:59:40 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Yosys 0.6+141 (git sha1 080f95f, gcc 5.3.1 -fPIC -Os)
UC Berkeley, ABC 1.01 (compiled Mar 8 2015 01:00:49)
The issue has been resolved.
Solution: changed the run directory so that the library path contains no spaces, from
/home/boris/Documents/Self Learning/yosys_synthesys/mycells.lib
to
/home/boris/Documents/SelfLearning/yosys_synthesys/mycells.lib
Lesson learned: the ABC tool does not accept space characters in path/file names.
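For reference, a minimal sketch of the same demo run from a space-free directory (the exact command list is an assumption based on the PRESENTATION_Intro example, not the original script):
$ mkdir -p ~/yosys_synthesys && cp counter.v mycells.lib ~/yosys_synthesys/
$ cd ~/yosys_synthesys
$ yosys -p 'read_verilog counter.v; synth -top counter; dfflibmap -liberty mycells.lib; abc -liberty mycells.lib; clean; write_verilog synth.v'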

cron command to run every 12 hours

I need a unix cron command to run every 12 hours.
I have 500+ sub-blogs on my server.
This is the file I want to run every 12 hours:
http://*.mysite.com/somedir/index.php
where * is the subdomain of each blog.
I need a cron command for all blogs.
Is it possible to run all of them with a single command, or do I have to create a command for each blog?
A crontab file has five fields for specifying the day, date and time, followed by the command to be run at that interval.
* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
A * in a value field above means all legal values, as given in parentheses, for that column.
You could use 0 1,13 * * *, which means every day at 1 AM and 1 PM:
0 1,13 * * * php /var/www/*/somedir/index.php > /home/someuser/cronlogs/some.log 2>&1
where * can be replaced by different domain names.
I think the right way is 1 */12 * * * (actually, any number in the minute position will do the trick).
If you set * */12 * * *, it will be executed every minute during hour 0 and again every minute during hour 12.
Assuming your sites live in /var/www/sitename and you have the PHP CLI installed at /usr/bin/php, you can easily create a cron job that runs all those files.
run
crontab -e
and add this line
42 */12 * * * /usr/bin/php /var/www/*/somedir/index.php >> ~/cronjob.log 2>&1
The * here in /var/www/*/somedir is just a wildcard. This means it will match every directory in your /var/www folder.
For example:
[jens#localhost ~]$ ls -l temp
total 28
-rw-rw-r--. 1 jens jens 1641 Feb 21 16:12 somefile.py
drwxrwxr-x. 2 jens jens 4096 Feb 22 15:10 test
drwxrwxr-x. 2 jens jens 4096 Feb 22 15:10 test2
drwxrwxr-x. 2 jens jens 4096 Feb 22 15:10 test3
drwxr-xr-x. 8 jens jens 4096 Jan 27 10:21 emptydir
-rw-rw-r--. 1 jens jens 548 Jan 27 16:15 Unsaved Document 1
[jens#localhost ~]$ ls temp/*/testfile.php
temp/test2/testfile.php temp/test3/testfile.php temp/test/testfile.php
As you can see, this returns the testfile.php in each subfolder of temp, namely folder test, test2 and test3.
The emptydir folder is also there, but since it has no testfile.php in it, nothing will happen with it.
If your directory structure is arbitrarily deep, you can use **, e.g.:
42 */12 * * * /usr/bin/php /var/www/**/index.php >> ~/cronjob.log 2>&1
Use "*/12" to mean "every 12 hours."
You need some kind of master script (called by cron) which expands the list of sites and calls "/usr/bin/php /var/www/*/somedir/index.php" with the '*' replaced by a list entry. This can be done in a shell script, a Perl or Python script, or maybe even a PHP script. For sh this could be (untested):
#!/bin/sh
cd /home/subdir/for/cron
LIST="a b c d e f g h i j k l m n o p q r s t u v w x y z"
for x in $LIST; do
    /usr/bin/php /var/www/${x}/somedir/index.php > /tmp/${x}.log 2>&1
done
If it is inconvenient to have the list hardcoded like this, there are other methods:
backticks, or read < file_with_all_the_names_in_it
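For example, the read-from-a-file variant could look like this (a sketch; blogs.txt, with one subdomain per line, is a hypothetical file):
#!/bin/sh
# read one subdomain per line and run that blog's index.php
while read -r x; do
    /usr/bin/php /var/www/${x}/somedir/index.php > /tmp/${x}.log 2>&1
done < /home/subdir/for/cron/blogs.txt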
0 */12 * * * means "At minute 0 past every 12th hour."
Check out https://crontab.guru for a nice calculator.
Write this command in the console:
crontab -e
Edit with your editor (I like nano) and add this line:
0 1,13 * * * php /home/catalog/public_html/crons/index.php
Close with:
press ctrl + x
press y, then press enter
Done :)
Check that it was saved with the
crontab -l
command.
If you want to test whether it will work, just run it manually with the
php /home/catalog/public_html/crons/index.php
command.
Use this; it will run every 12 hours:
0 */12 * * * php /var/www/"Your domain"/cronfile.php
->cron('0 */12 * * *');
This cron expression will run the scheduler every 12 hours.
