I have made a GUI (using PySimpleGUI) where you can play against Stockfish (using the python-chess module). I build an .exe file with PyInstaller --noconsole, but when I run it, Stockfish opens in a console window. When I run it from source, in PyCharm, Stockfish runs silently in the background.
The relevant lines of code are (I guess):
engine = chess.engine.SimpleEngine.popen_uci(engine_filename, shell=False)
and, a bit later,
best_move = engine.play(board, chess.engine.Limit(depth=20)).move
Any advice on how I can make Stockfish run silently in the background also from the .exe file?
Define your engine as below, passing creationflags so that no console window is created.
import subprocess
engine = chess.engine.SimpleEngine.popen_uci(
    engine_filename,
    shell=False,
    creationflags=subprocess.CREATE_NO_WINDOW,
)
See the Python subprocess reference for details on CREATE_NO_WINDOW.
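Note that subprocess.CREATE_NO_WINDOW only exists on Windows, so if the same source also needs to run on other systems, a small guard helps. A minimal sketch (engine_path is a placeholder; the play call mirrors the question):

import subprocess
import sys

import chess
import chess.engine

engine_path = "stockfish.exe"  # placeholder; point this at your Stockfish binary

popen_kwargs = {}
if sys.platform == "win32":
    # Hide the engine's console window when the GUI is frozen with --noconsole.
    popen_kwargs["creationflags"] = subprocess.CREATE_NO_WINDOW

engine = chess.engine.SimpleEngine.popen_uci(engine_path, **popen_kwargs)

board = chess.Board()
best_move = engine.play(board, chess.engine.Limit(depth=20)).move
engine.quit()  # shut the engine down cleanly when done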
My goal is to create a presentation with Jupyter notebook without code input.
I have tried the following code
!jupyter nbconvert Explanatory_Analysis.ipynb --to slides --post serve --no-input --no-prompt
This command raises a NotImplementedError.
Here's a somewhat hacky solution.
Paste the following code into a new code cell, then execute the cell.
Be sure to change the NOTEBOOK variable to the filename of the current notebook and SAVE the notebook BEFORE running.
The hackiest thing about it is that the code overwrites the current notebook, so you'll need to refresh the Jupyter page in your browser after running the script.
import nbformat as nbf
import os

NOTEBOOK = "Explanatory_Analysis.ipynb"
PATH = f'{os.path.abspath("")}/{NOTEBOOK}'

ntbk = nbf.read(PATH, nbf.NO_CONVERT)

for i, cell in enumerate(ntbk.cells):
    if cell.cell_type == "code":
        metadata = cell["metadata"]
        slideshow = metadata.get("slideshow", {})
        print(f"[cell#index={i}] {cell.cell_type=}")
        print(f"BEFORE {metadata=}, {slideshow=}")
        slideshow["slide_type"] = "skip"
        metadata["slideshow"] = slideshow
        print(f"AFTER {metadata=}, {slideshow=}")

nbf.write(ntbk, PATH)
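Once the notebook has been refreshed and saved again, re-running nbconvert should give slides without the code cells, since they are now marked as skipped, so the --no-input and --no-prompt flags from the question can be dropped. A minimal sketch using subprocess (the ! magic from the question works just as well):

import subprocess

# Re-generate the slides; cells tagged "skip" above are left out of the output.
subprocess.run(
    ["jupyter", "nbconvert", "Explanatory_Analysis.ipynb",
     "--to", "slides", "--post", "serve"],
    check=True,
)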
Looking at this page and this piece of code in particular:
import boto3
account_id = boto3.client("sts").get_caller_identity().get("Account")
region = boto3.session.Session().region_name
ecr_repository = "r-in-sagemaker-processing"
tag = ":latest"
uri_suffix = "amazonaws.com"
processing_repository_uri = "{}.dkr.ecr.{}.{}/{}".format(
    account_id, region, uri_suffix, ecr_repository + tag
)
# Create ECR repository and push Docker image
!docker build -t $ecr_repository docker
!$(aws ecr get-login --region $region --registry-ids $account_id --no-include-email)
!aws ecr create-repository --repository-name $ecr_repository
!docker tag {ecr_repository + tag} $processing_repository_uri
!docker push $processing_repository_uri
This is not pure Python, obviously? Are these AWS CLI commands? I have used Docker previously, but I find this example very confusing. Is anyone aware of an end-to-end example of simply running some R job in AWS using SageMaker/Docker? Thanks.
This is Python code mixed with shell-script magic calls (the ! commands).
Magic commands aren't unique to this platform; you can use them in any Jupyter environment, but this particular code is meant to be run on SageMaker, in what seems like a fairly convoluted way of running R scripts as processing jobs.
However, the only thing you really need to focus on is the R script and the final two cell blocks. The instruction at the top (don't change this line) creates a file (preprocessing.R) which gets executed later, and then you can see the results.
Just run all the code cells in that order, with your own custom R code in the first cell. Note the line plot_key = "census_plot.png" in the last cell. This refers to the image being created in the R code. For other output types (e.g. text) you'll have to look up the necessary Python package (PIL is an image-manipulation package) and adapt accordingly.
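For reference, here is a rough, unvalidated sketch of how the plot image itself could be pulled down and opened with PIL; s3_output_prefix is a placeholder for whatever S3 prefix variable the notebook's last cell actually defines:

from PIL import Image

plot_key = "census_plot.png"
plot_in_s3 = "{}/{}".format(s3_output_prefix, plot_key)  # s3_output_prefix is a placeholder name
!aws s3 cp {plot_in_s3} .

img = Image.open(plot_key)  # open the downloaded file locally
display(img)                # render it inline in the notebook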
Try this to get the CSV file that the R script is also generating (this code is not validated, so you might need to fix any problems that arise):
import csv

csv_key = "plot_data.csv"
csv_in_s3 = "{}/{}".format(preprocessed_csv_data, csv_key)
!aws s3 cp {csv_in_s3} .

with open(csv_key) as file:
    dat = list(csv.reader(file))  # read the rows into a list so display() shows the data
display(dat)
So now you should have an idea of how the two different output types the example R script generates are handled, and from there you can try to adapt your own R code based on what it outputs.
I have a simple script that works in RStudio to deploy an app:
rsconnect::setAccountInfo(name='xx', token='xx', secret='xx/xx')
library(rsconnect)
deployApp("xxx", launch.browser = FALSE)
After this, the following prompt appears:
Update application currently deployed at https://xxx.shinyapps.io/xx/?
which blocks my scheduled script.
Is there a way to skip this prompt and update the Shiny app without manually typing "Y" in the console?
Adding to what waskuf said, try adding forceUpdate = T to your code.
deployApp("xxx", launch.browser = F, forceUpdate = T)
Worked for me, at least.
It works if you just write an unquoted Y in your script after the "deployApp" command and run it in one batch. Like this:
rsconnect::setAccountInfo(name='xx', token='xx', secret='xx/xx')
library(rsconnect)
deployApp("xxx", launch.browser = FALSE)
Y
Just make sure the lines including deployApp(...) and Y are both selected and not separated by any other commands when executed.
I am working on a little interactive shell-like tool in R that uses readline to prompt stdin, like this:
console <- function(){
  while(nchar(input <- readline(">>> "))) {
    message("You typed: ", input)
  }
}
It works but the only thing that bothers me is that lines entered this way do not get pushed upon the history stack. Pressing the up-arrow in R gives the last R command that was entered before starting the console.
Is there any way I can manually push the input lines upon the history stack, such that pressing the up-arrow will show the latest line entered in the console function?
I use this in rite to add commands to the command history. In essence, you can just savehistory and loadhistory from a local file. I do:
tmphistory <- tempfile()
savehistory(tmphistory)
histcon <- file(tmphistory, open="a")
writeLines(code, histcon)  # 'code' is the character vector of commands to append (e.g. your 'input')
close(histcon)
loadhistory(tmphistory)
unlink(tmphistory)
Note: Mac doesn't use history in the same way as other OS's, so be careful with this.
These are the steps I am trying to achieve:
Upload a PDF document on the server.
Convert the PDF document to a set of images using GhostScript (every page is converted to an image).
Send the collection of images back to the client.
So far, I am interested in #2.
First, I downloaded both gswin32c.exe and gsdll32.dll and managed to manually convert a PDF to a collection of images (I opened cmd and ran the command below):
gswin32c.exe -dSAFER -dBATCH -dNOPAUSE -sDEVICE=jpeg -r150 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dMaxStripSize=8192 -sOutputFile=image_%d.jpg somepdf.pdf
Then I thought, I'll put gswin32c.exe and gsdll32.dll into ClientBin of my web project, and run the .exe via a Process.
System.Diagnostics.Process process1 = new System.Diagnostics.Process();
process1.StartInfo.WorkingDirectory = Request.MapPath("~/");
process1.StartInfo.FileName = Request.MapPath("ClientBin/gswin32c.exe");
process1.StartInfo.Arguments = "-dSAFER -dBATCH -dNOPAUSE -sDEVICE=jpeg -r150 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dMaxStripSize=8192 -sOutputFile=image_%d.jpg somepdf.pdf";
process1.Start();
Unfortunately, nothing was output in ClientBin. Anyone got an idea why? Any recommendation will be highly appreciated.
I've tried your code and it seems to be working fine. I would recommend checking the following things:
Verify that your somepdf.pdf is in the working folder of the gs process, or specify the full path to the file in the command line. It would also be useful to see Ghostscript's output by doing something like this:
....
process1.StartInfo.RedirectStandardOutput = true;
process1.StartInfo.UseShellExecute = false;
process1.Start();
// read output
string output = process1.StandardOutput.ReadToEnd();
...
process1.WaitForExit();
...
If gs can't find your file, you will get an "Error: /undefinedfilename in (somepdf.pdf)" message in the output stream.
Another suggestion: you may be proceeding with your script without waiting for the gs process to finish and generate the resulting image_N.jpg files. I guess adding process1.WaitForExit() should solve the issue.