I'm trying to preprocess an image but am stuck at argparse itself, please help me - jupyter-notebook

I'm trying to learn the EAST model for text recognition. I want to pass a different image to the code each time and am trying argparse, but I'm stuck.
How do we pass the image path to ap.add_argument("-i", "--image", type=str)?
The path is C:\Users\nishant\Desktop\Use_case\screenshot\hqdefault.png
import argparse
ap = argparse.ArgumentParser()
ap.add_argument("-i", '--image', type=str)
ap.add_argument("-east", "--east", type=str)
ap.add_argument("-c", "--min-confidence", type=float, default=0.5,)
ap.add_argument("-w", "--width", type=int, default=320)
ap.add_argument("-e", "--height", type=int, default=320)
ap.add_argument("-p", "--padding", type=float, default=0.0)
args = vars(ap.parse_args())
usage: ipykernel_launcher.py [-h] [-i IMAGE] [-east EAST] [-c MIN_CONFIDENCE]
[-w WIDTH] [-e HEIGHT] [-p PADDING]
ipykernel_launcher.py: error: unrecognized arguments: -f C:\Users\777569\AppData\Roaming\jupyter\runtime\kernel-a0ced498-94e0-406e-9bf5-8f8c125ff96a.json
An exception has occurred, use %tb to see the full traceback.
SystemExit: 2

I fixed mine by changing the last line from args = vars(ap.parse_args()) to args = ap.parse_args(args=[]). Passing args=[] stops argparse from reading sys.argv, which inside a Jupyter kernel contains the extra -f ...kernel-....json argument that argparse does not recognise. If the rest of your code indexes args like a dictionary, keep the vars() wrapper: args = vars(ap.parse_args(args=[])).
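If you want a different image on each run without editing the parser, you can also feed the path to argparse yourself through the args list. A minimal sketch, keeping the parser definition from the question unchanged and using the path given above:
import argparse

ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", type=str)
# ... the other add_argument calls from the question stay the same ...

# Feed explicit arguments instead of letting argparse read the Jupyter
# kernel's own command line (which contains the unrecognised -f ...json).
args = vars(ap.parse_args(args=[
    "--image", r"C:\Users\nishant\Desktop\Use_case\screenshot\hqdefault.png",
]))
print(args["image"])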

Related

snakemake Wildcards in input files cannot be determined from output files:

I use snakemake to create a pipeline to split a bam by chromosome, but I met a problem:
Wildcards in input files cannot be determined from output files:
'OutputDir'
Can someone help me figure it out?
if config['ref'] == 'hg38':
    ref_chr = []
    for i in range(1, 23):
        ref_chr.append('chr' + str(i))
    ref_chr.extend(['chrX', 'chrY'])
elif config['ref'] == 'b37':
    ref_chr = []
    for i in range(1, 23):
        ref_chr.append(str(i))
    ref_chr.extend(['X', 'Y'])

rule all:
    input:
        expand(f"{OutputDir}/split/{name}.{{chr}}.bam", chr=ref_chr)

rule minimap2:
    input:
        TargetFastq
    output:
        Sortbam = "{OutputDir}/{name}.sorted.bam",
        Sortbai = "{OutputDir}/{name}.sorted.bam.bai"
    resources:
        mem_mb = 40000
    threads: nt
    singularity:
        OntSoftware
    shell:
        """
        minimap2 -ax map-ont -d {ref_mmi} --MD -t {nt} {ref_fasta} {input} | samtools sort -O BAM -o {output.Sortbam}
        samtools index {output.Sortbam}
        """

rule split_bam:
    input:
        rules.minimap2.output.Sortbam
    output:
        splitBam = expand(f"{OutputDir}/split/{name}.{{chr}}.bam", chr=ref_chr),
        splitBamBai = expand(f"{OutputDir}/split/{name}.{{chr}}.bam.bai", chr=ref_chr)
    resources:
        mem_mb = 30000
    threads: nt
    singularity:
        OntSoftware
    shell:
        """
        samtools view -@ {nt} -b {input} {chr} > {output.splitBam}
        samtools index -@ {nt} {output.splitBam}
        """
I changed the wildcard {OutputDir}, but it does not help:
expand(f"{OutputDir}/split/{name}.{{chr}}.bam",chr=ref_chr),
splitBamBai = expand(f"{OutputDir}/split/{name}.{{chr}}.bam.bai",chr=ref_chr),
A couple of comments on these lines:
You escape chr by using double braces, {{chr}}. This means you don't want chr to be expanded, which I doubt is correct. I suspect you want something like:
expand("{{OutputDir}}/split/{{name}}.{chr}.bam", chr=ref_chr),
The rule minimap2 does not contain the {chr} wildcard, hence the error you get.
As an aside, when you create a bam file and its index in the same rule, you can end up with the index's time stamp being older than the bam file itself. This can later generate spurious warnings from samtools/bcftools. See https://github.com/snakemake/snakemake/issues/1378 (not sure if it's been fixed).
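For what it's worth, one common pattern for this kind of per-chromosome split is to keep {chr} as a real wildcard in split_bam's output and let the expand() in rule all drive it, while the parts that are not meant to be wildcards (OutputDir, name) are made concrete everywhere, e.g. via f-strings. A sketch under those assumptions, reusing the variables from the question (not tested against your config):
rule split_bam:
    input:
        f"{OutputDir}/{name}.sorted.bam"    # concrete path, no OutputDir/name wildcards
    output:
        splitBam = f"{OutputDir}/split/{name}.{{chr}}.bam",        # {chr} stays a wildcard
        splitBamBai = f"{OutputDir}/split/{name}.{{chr}}.bam.bai"
    resources:
        mem_mb = 30000
    threads: nt
    singularity:
        OntSoftware
    shell:
        """
        samtools view -@ {threads} -b {input} {wildcards.chr} > {output.splitBam}
        samtools index -@ {threads} {output.splitBam}
        """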

When using jupyter_client how do I get data in HTML?

I'm wondering if jupyter_client is able to return the output of code sent to the execute function as HTML somehow?
I'm also wondering if I can do the same with stdout and stderr, as well as markdown?
If jupyter_client cannot do this, is there a jupyter library that does?
Adapting the solution from here might help. This adaptation takes the request 1+1 (msg_id = c.execute('1+1')) and returns the result formatted in HTML as bold red text via display(HTML('<div style="color:Red;"><b>' + res + '</b></div>')), using IPython's display module. The kernel info status check has been commented out but left for reference.
from subprocess import PIPE
from jupyter_client import KernelManager
from IPython.display import display, HTML
from queue import Empty

km = KernelManager(kernel_name='python3')
km.start_kernel()
# print(km.is_alive())
try:
    c = km.client()
    msg_id = c.execute('1+1')
    state = 'busy'
    data = {}
    while state != 'idle' and c.is_alive():
        try:
            msg = c.get_iopub_msg(timeout=1)
            if not 'content' in msg: continue
            content = msg['content']
            if 'data' in content:
                data = content['data']
            if 'execution_state' in content:
                state = content['execution_state']
        except Empty:
            pass
    res = data['text/plain']
    # print(data)
    display(HTML('<div style="color:Red;"><b>' + res + '</b></div>'))
except KeyboardInterrupt:
    pass
finally:
    km.shutdown_kernel()
    # print(km.is_alive())
Also see here for more info.
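On the HTML part of the question specifically: the data dict above is a mime bundle, so if the executed code produces a rich display in the kernel (for example a pandas DataFrame, or an explicit IPython.display.HTML(...) call), the bundle can contain a 'text/html' entry alongside 'text/plain'; stdout and stderr arrive separately as 'stream' messages rather than in the bundle. A hedged sketch of preferring the HTML representation when present, as a drop-in for the res = ... line above:
# Prefer the kernel's own HTML representation when it exists,
# otherwise fall back to the plain-text result.
html = data.get('text/html')
if html is not None:
    display(HTML(html))
else:
    display(HTML('<div style="color:Red;"><b>' + data.get('text/plain', '') + '</b></div>'))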

python3 mime and file object not working

I'm trying to use the below mail function with Python 3 and it throws NameError: name 'file' is not defined; it works perfectly with Python 2.
I got to know that file() is not supported in Python 3, so what is the substitute for file()?
#!/usr/bin/env python3
from subprocess import Popen, PIPE
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import os

############ File comparison & sendmail part starts here ########
def ps_Mail():
    filename = "/tmp/ps_msg"
    f = file(filename)
    if os.path.exists(filename) and os.path.getsize(filename) > 0:
        mailp = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE)
        msg = MIMEMultipart('alternative')
        msg['To'] = "sam@seemac.com"
        msg['Subject'] = "Uhh!! Unsafe process seen"
        msg['From'] = "psCheck@seemac.com"
        msg1 = MIMEText(filename.read(), 'text')
        msg.attach(msg1)
        mailp.communicate(msg.as_string())

ps_Mail()
I have edited your code and this should work, please try this...
There are two key things to change: pass universal_newlines=True to Popen and use open() instead of file(). Note also that the message body has to be read from the file object f, not from the filename string.
#!/usr/bin/env python3
from subprocess import Popen, PIPE
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
import os

############ File comparison & sendmail part starts here ########
def ps_Mail():
    filename = "/tmp/ps_msg"
    if os.path.exists(filename) and os.path.getsize(filename) > 0:
        f = open(filename)  # open() is the Python 3 replacement for the file() builtin
        mailp = Popen(["/usr/sbin/sendmail", "-t", "-oi"], stdin=PIPE, universal_newlines=True)
        msg = MIMEMultipart('alternative')
        msg['To'] = "sam@seemac.com"
        msg['Subject'] = "Uhh!! Unsafe process seen"
        msg['From'] = "psCheck@seemac.com"
        msg1 = MIMEText(f.read(), 'plain')  # read from the file object f; 'plain' gives text/plain
        msg.attach(msg1)
        mailp.communicate(msg.as_string())

ps_Mail()
For more details....
What is the difference between using universal_newlines=True (with bufsize=1) and using default arguments with Popen
The default values are universal_newlines=False and bufsize=-1. universal_newlines=False means input/output is accepted as bytes, not Unicode strings, and the universal newlines handling (hence the name of the parameter, though text_mode might have been a better name here) is disabled; you get binary data as is (unless the POSIX layer on Windows messes it up). bufsize=-1 means the streams are fully buffered, with the default buffer size.
universal_newlines=True uses the locale.getpreferredencoding(False) character encoding to decode bytes (which may be different from the ascii encoding used in your code).
If universal_newlines=False then for line in Robocopy.stdout: iterates over b'\n'-separated lines. If the process uses a non-ascii encoding, e.g. UTF-16, for its output, then even if os.linesep == '\n' on your system you may get a wrong result. If you want to consume text lines, use the text mode: pass universal_newlines=True or use io.TextIOWrapper(process.stdout) explicitly.
The second version does include universal_newlines and therefore I specify a bufsize.
In general, it is not necessary to specify bufsize if you use universal_newlines (you may, but it is not required), and you don't need to specify bufsize in your case. bufsize=1 enables line-buffered mode (the input buffer is flushed automatically on newlines if you write to process.stdin); otherwise it is equivalent to the default bufsize=-1.
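To make the bytes-versus-text distinction concrete, here is a small sketch; echo is just a stand-in command, and on Python 3.7+ text=True is an alias for universal_newlines=True:
from subprocess import Popen, PIPE

# Default: the streams are bytes.
p = Popen(["echo", "hello"], stdout=PIPE)
out, _ = p.communicate()
print(repr(out))    # b'hello\n'

# universal_newlines=True: the streams are str, decoded with the locale's
# preferred encoding, and newlines are normalised.
p = Popen(["echo", "hello"], stdout=PIPE, universal_newlines=True)
out, _ = p.communicate()
print(repr(out))    # 'hello\n'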

How can I get a one line per test result with Robot Framework?

I want to take test case results from Robot Framework runs and import those results into other tools (ElasticSearch, ALM tools, etc).
Towards that end I would like to be able to generate a text file with one line per test. Here is an example line, pipe-delimited:
testcase name | time run | duration | status
There are other fields I would add, but those are the basic ones. Any help appreciated. I have been looking at robot.result (http://robot-framework.readthedocs.io/en/3.0.2/autodoc/robot.result.html) but haven't figured it out yet. If/when I do I will post the answer here.
Thanks,
The output.xml file is very easy to parse with normal XML parsing libraries.
Here's a quick example:
from __future__ import print_function
import xml.etree.ElementTree as ET
from datetime import datetime

def get_robot_results(filepath):
    results = []
    with open(filepath, "r") as f:
        xml = ET.parse(f)
        root = xml.getroot()
        if root.tag != "robot":
            raise Exception("expect root tag 'robot', got '%s'" % root.tag)
        for suite_node in root.findall("suite"):
            for test_node in suite_node.findall("test"):
                status_node = test_node.find("status")
                name = test_node.attrib["name"]
                status = status_node.attrib["status"]
                start = status_node.attrib["starttime"]
                end = status_node.attrib["endtime"]
                start_time = datetime.strptime(start, '%Y%m%d %H:%M:%S.%f')
                end_time = datetime.strptime(end, '%Y%m%d %H:%M:%S.%f')
                elapsed = str(end_time - start_time)
                results.append([name, start, elapsed, status])
    return results

if __name__ == "__main__":
    results = get_robot_results("output.xml")
    for row in results:
        print(" | ".join(row))
Bryan is right that it's easy to parse Robot's output.xml using standard XML parsing modules. Alternatively you can use Robot's own result parsing modules and the model you get from it:
from robot.api import ExecutionResult, SuiteVisitor

class PrintTestInfo(SuiteVisitor):
    def visit_test(self, test):
        print('{} | {} | {} | {}'.format(test.name, test.starttime,
                                         test.elapsedtime, test.status))

result = ExecutionResult('output.xml')
result.suite.visit(PrintTestInfo())
For more details about the APIs used above see http://robot-framework.readthedocs.io/.
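Since the goal is to feed these lines into other tools, you might redirect the visitor's output to a file instead of printing it. A small variation on the visitor above (results.txt is just an assumed output file name):
from robot.api import ExecutionResult, SuiteVisitor

class WriteTestInfo(SuiteVisitor):
    def __init__(self, out):
        self.out = out

    def visit_test(self, test):
        # One pipe-delimited line per test: name | start time | duration (ms) | status
        self.out.write('{} | {} | {} | {}\n'.format(
            test.name, test.starttime, test.elapsedtime, test.status))

result = ExecutionResult('output.xml')
with open('results.txt', 'w') as out:
    result.suite.visit(WriteTestInfo(out))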

How to get sys.exc_traceback form IPython shell.run_code?

My app interfaces with the IPython Qt shell with code something like this:
from IPython.core.interactiveshell import ExecutionResult

shell = self.kernelApp.shell  # ZMQInteractiveShell
code = compile(script, file_name, 'exec')
result = ExecutionResult()
shell.run_code(code, result=result)
if result:
    self.show_result(result)
The problem is: how can show_result show the traceback resulting from exceptions in code?
Neither the error_before_exec nor the error_in_exec ivars of ExecutionResult seem to give references to the traceback. Similarly, neither sys nor shell.user_ns.namespace.get('sys') has a sys.exc_traceback attribute.
Any ideas? Thanks!
Edward
IPython/core/interactiveshell.py contains InteractiveShell._showtraceback:
def _showtraceback(self, etype, evalue, stb):
    """Actually show a traceback. Subclasses may override..."""
    print(self.InteractiveTB.stb2text(stb), file=io.stdout)
The solution is to monkey-patch IS._showtraceback so that it writes to sys.stderr, which shows up in the Qt console:
from __future__ import print_function
...
shell = self.kernelApp.shell  # ZMQInteractiveShell
code = compile(script, file_name, 'exec')

def show_traceback(etype, evalue, stb, shell=shell):
    print(shell.InteractiveTB.stb2text(stb), file=sys.stderr)
    sys.stderr.flush()  # <==== Oh, so important

old_show = getattr(shell, '_showtraceback', None)
shell._showtraceback = show_traceback
shell.run_code(code)
if old_show:
    shell._showtraceback = old_show
Note: there is no need to pass an ExecutionResult object to shell.run_code().
EKR
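For completeness, a possibly simpler route worth trying (not what the answer above does): on Python 3 the exception stored in result.error_in_exec carries its own __traceback__, so you may be able to format it directly with the standard traceback module. A sketch, assuming show_result receives the ExecutionResult from the snippet in the question:
import traceback

def show_result(self, result):
    err = result.error_in_exec or result.error_before_exec
    if err is not None:
        # Format the stored exception and its traceback as plain text.
        text = ''.join(traceback.format_exception(type(err), err, err.__traceback__))
        print(text)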
