I'd like to be able to know the base URL of the Jupyter Notebook server that IPython is presently connected to. I'm aware of the notebook.notebookapp.list_running_servers() function, which produces output like:
[
  {
    'base_url': '/',
    'hostname': 'localhost',
    'notebook_dir': '/home/username/dir-notebook-was-spawned-in',
    'password': False,
    'pid': 368094,
    'port': 8888,
    'secure': False,
    'sock': '',
    'token': '4e7e860527d5333306cb06c594aa2167a7d375294f96c2d9',
    'url': 'http://localhost:8888/',
  },
  ...
]
This feels tantalizingly close to what I want since there's a base_url key there; however, I don't know how to determine which server in the list IPython is actually connected to. The closest approximation I've been able to come up with is to check which server's notebook_dir key most closely matches os.getcwd(), but this is obviously imperfect.
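For what it's worth, here is a minimal sketch of that imperfect heuristic (it assumes the kernel runs on the same machine as the server and that the working directory hasn't been changed; it can easily pick the wrong server or none at all):

import os
from notebook import notebookapp

cwd = os.getcwd()
# Heuristic only: pick a running server whose notebook_dir is a prefix of the
# kernel's current working directory.
candidates = [s for s in notebookapp.list_running_servers()
              if cwd.startswith(s['notebook_dir'])]
base_url = candidates[0]['base_url'] if candidates else None
print(base_url)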
Further Findings:
I've now realized that notebook.notebookapp.list_running_servers() is not the right way to go about this, because the notebook server and the kernel are not guaranteed to be running on the same machine, and in that case the function would always return an empty list.
There was some discussion about this in ipyleaflet: you cannot get the base URL from the back-end alone; the way we do it in ipyleaflet is to get it in the front-end using window.location.href.
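For reference, here is a minimal sketch of that front-end approach from inside a Python cell. It relies on the classic Notebook's JavaScript API (IPython.notebook.kernel.execute), so it will not work in JupyterLab, and the variable name notebook_url is my own choice:

from IPython.display import Javascript, display

# Ask the browser for its URL and write it back into the kernel as a Python
# variable. Classic Notebook front-end only.
display(Javascript('IPython.notebook.kernel.execute("notebook_url = \'" + window.location.href + "\'")'))
# In a later cell, notebook_url should then hold something like
# http://localhost:8888/notebooks/your-notebook.ipynb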
On the command line you can use the following:
jupyter notebook list --json | python3 -c 'import json; import sys; print(json.load(sys.stdin)["base_url"])'
(Remove the ["base_url"] part from that command to see the full dictionary).
In Python, the base URL is listed among the output of:

import psutil

# The kernel's parent process is normally the notebook server; its command
# line includes the arguments the server was started with, the base URL
# among them.
psutil.Process().parent().cmdline()
These are derived from discussions on the Jupyter Discourse Forum here and here.
For Binder sessions, it was pointed out here that the following gives a good listing of details:
env | grep -i jupyter
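If you prefer to do the same from Python, here is a minimal sketch; the JUPYTERHUB_SERVICE_PREFIX variable mentioned in the comment is an assumption about what the listing typically contains, so check your own session:

import os

# Python equivalent of `env | grep -i jupyter`: print every environment
# variable whose name mentions Jupyter. On Binder/JupyterHub the listing
# typically includes JUPYTERHUB_SERVICE_PREFIX, which holds the base URL path.
for name, value in sorted(os.environ.items()):
    if 'jupyter' in name.lower():
        print(f'{name}={value}')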
import { exec } from "https://deno.land/x/exec/mod.ts";
await exec(`git clone https://github.com/vuejs/vue.git`)
When I run git clone https://github.com/vuejs/vue.git in a .sh file, it prints messages in the terminal,
but when I run it through Deno there is no output.
First, I think it is important to echo what jsejcksn commented:
The exec module is not related to Deno. All modules at https://deno.land/x/... are third-party code. For working with your shell, see Creating a subprocess in the manual.
Deno's way of doing this without a 3rd party library is to use Deno.run.
With that said, if you take a look at exec's README you'll find documentation for what you're looking for under Capture the output of an external command:
Sometimes you need to capture the output of a command. For example, I do this to get git log checksums:
import { exec, OutputMode } from "https://deno.land/x/exec/mod.ts";

// Capture stdout instead of piping it straight to the terminal.
let response = await exec('git log -1 --format=%H', { output: OutputMode.Capture });
If you look at exec's code you'll find it uses Deno.run under the hood. If you like exec you can use it, but you might find you prefer using Deno.run directly instead.
I'm using Firebase with the Cloud Functions feature and FFmpeg.
I see that FFmpeg is now included by default in the pre-installed packages, as you can see here.
This way, I can use it with a spawn command like this:
await spawn('ffmpeg', [
  '-y',               // overwrite the output file if it already exists
  '-i',
  tempFilePath,       // input video
  '-vf',
  'transpose=2',      // rotate 90° counter-clockwise
  targetTempFilePath, // output video
]);
and it works perfectly.
Unfortunately, when I try to stabilize the video with vidstab, I get the following error:
ChildProcessError: `ffmpeg -i /tmp/1628240712871_edited.mp4 -vf vidstabdetect=result=transforms.trf -an -f null -` failed with code 1
I think it's because libvidstab is not enabled in that FFmpeg build, as stated below:
To enable compilation of this filter, you need to configure FFmpeg with
--enable-libvidstab.
Do you have any idea how I can activate/use it?
Thank you in advance
For your information, I managed to do it.
I had to upload my own binary. For that, I just downloaded a pre-compiled binary for a Debian-based Linux environment (here, but you can get yours anywhere else ;) ) and put it inside the functions directory.
After that, I just deployed the functions, as usual.
This would result in the functions being deployed along with the binary.
I can now call my own binary with a command like this:
import { spawn } from 'child-process-promise';
...
await spawn('./ffmpeg', [
// my commands
]);
I hope it helps ;)
I'm using dagster 0.11.3 (the latest as of this writing)
I've created a Dagster pipeline (saved as pipeline.py) that looks like this:
from dagster import ModeDefinition, pipeline, solid
from dagster_dask import dask_executor

@solid
def return_a(context):
    return 12.34

@pipeline(
    mode_defs=[
        ModeDefinition(
            executor_defs=[dask_executor]  # Note: dask only!
        )
    ]
)
def the_pipeline():
    return_a()
I have the DAGSTER_HOME environment variable set to a directory that contains a file named dagster.yaml, which is an empty file. This should be ok because the defaults are reasonable based on these docs: https://docs.dagster.io/deployment/dagster-instance.
I have an existing Dask cluster running at "scheduler:8786". Based on these docs: https://docs.dagster.io/deployment/custom-infra/dask, I created a run config named config.yaml that looks like this:
execution:
  dask:
    config:
      cluster:
        existing:
          address: "scheduler:8786"
I have SUCCESSFULLY used this run config with Dagster like so:
$ dagster pipeline execute -f pipeline.py -c config.yaml
(I checked the Dask logs and made sure that it did indeed run on my Dask cluster)
My question is: How can I get Dagit to use this Dask cluster?
The only thing I have found that seems related is this:
https://docs.dagster.io/_apidocs/execution#executors
...but it doesn't even mention Dask as an option (it has dagster.in_process_executor and dagster.multiprocess_executor, which don't seem at all related to dask).
Probably I need to configure dagster-dask, which is documented here: https://docs.dagster.io/_apidocs/libraries/dagster-dask#dask-dagster-dask
...but where do I put that run config when using Dagit? There's no way to feed config.yaml to Dagit, for example.
Some options:
you can manually plug the values that are in config.yaml into the Dagit playground
you can bind the config directly to the executor if you do not need to change it ever https://docs.dagster.io/concepts/configuration/configured#configured-api
you can create a preset from that config yaml https://docs.dagster.io/tutorial/advanced-tutorial/pipelines#pipeline-config-presets
Given the context, I would recommend the configured API; a sketch follows below.
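A minimal sketch of the configured approach, assuming the pipeline from the question and the same existing Dask scheduler address as in config.yaml; with the config baked into the executor definition, Dagit needs no extra run config for execution:

from dagster import ModeDefinition, pipeline, solid
from dagster_dask import dask_executor

# Bind the run config from config.yaml directly to the executor definition.
dask_on_cluster = dask_executor.configured(
    {"cluster": {"existing": {"address": "scheduler:8786"}}},
    name="dask_on_cluster",
)

@solid
def return_a(context):
    return 12.34

@pipeline(mode_defs=[ModeDefinition(executor_defs=[dask_on_cluster])])
def the_pipeline():
    return_a()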
I want to set some parameters as defined here (https://github.com/nteract/papermill#python-version-support). The catch is, I want to be able to do this via the UI. I have a JupyterHub (JHub) installed on my cluster, and when a notebook opens in it, I want certain parameters to be set by default.
Also, when I pass the parameters via papermill (the above script gets saved somewhere and then I will run it via papermill), I want the latter to override the former.
I tried looking into several topics on plain Jupyter notebooks, but in vain.
For the user to have access to some parameters as soon as their notebook starts, IPython needs to know about the startup scripts. In the case of JupyterHub, this can be done via the following configuration:
proxy:
  secretToken: "yada yada"
singleuser:
  image:
    name: some_acc_id.dkr.ecr.ap-south-1.amazonaws.com/demo
    tag: 12h
  lifecycleHooks:
    postStart:
      exec:
        command: ["/bin/sh", "-c", 'ipython profile create; cd ~/.ipython/profile_default/startup; echo ''run_id = "sample" ''> aviral.py']
  imagePullSecret:
    enabled: true
    registry: some_acc_id.dkr.ecr.ap-south-1.amazonaws.com
    username: aws
    email: aviral@abc.com
Make sure you are escaping the quotes in the yaml correctly, or simply follow what I have done above.
Once this is done, papermill will override the params, but for that you have to make sure that the cell is tagged as "parameters". For instance, in my JupyterHub, every notebook that starts has a run_id variable with the value "sample".
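For completeness, a minimal sketch of the papermill side, assuming a notebook with a cell tagged "parameters" that defines run_id (the file names here are hypothetical):

import papermill as pm

# The value passed here overrides the default run_id = "sample" injected by
# the startup script configured above.
pm.execute_notebook(
    'input.ipynb',    # hypothetical source notebook
    'output.ipynb',   # hypothetical executed copy
    parameters={'run_id': 'overridden-by-papermill'},
)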
I have developed a desktop application in OCaml under Ubuntu.
Now, I would like to deploy it to a DigitalOcean Ubuntu server (512 MB Memory / 20 GB Disk) that I own. I will use JavaScript programs on the client side to call the executable stored on the server side, then deal with the returned results.
However, I have no idea how to get started.
Someone pointed me to FastCGI, and I did see some FastCGI settings in the Nginx server. It seems that there are some OCaml libraries to handle FastCGI or CGI: ocamlnet, cgi, CamlGI, etc.
Could anyone tell me which library is stable and suits my needs?
Besides, are there some samples of the library and of the Nginx server settings to get me started?
I don't think the solution I will propose is the lightest, but it has several advantages:
It allows you to generate the website in OCaml, so interfacing with your code won't be too hard to do
If needed, you will be able to export your whole application directly to JavaScript: your server won't do useless computations that the user's browser could do, and moreover you don't need to rewrite your code in JavaScript, since Ocsigen can convert it for you
If some operations need to be performed by the server, you can very easily call server-side functions from the client-side code, and all your code stays in OCaml
It's pretty easy
What is this amazing tool? Ocsigen! You can find a complete tutorial here.
Now let's see how you can use it.
Install Ocsigen
First, if you don't have it, install opam (it will allow you to install OCaml packages). Just follow the instructions on the website (I cannot paste the link since I don't have enough reputation points), but basically for Ubuntu run:
sudo add-apt-repository ppa:avsm/ppa
sudo apt-get update
sudo apt-get install ocaml ocaml-native-compilers camlp4-extra m4 opam
Then you need to install Ocsigen. All instructions are here: https://ocsigen.org/install but basically just do:
sudo aptitude install libev-dev libgdbm-dev libncurses5-dev libpcre3-dev libssl-dev libsqlite3-dev libcairo-ocaml-dev m4 opam camlp4-extra
opam install eliom
(Note: you can also install it with apt-get if you don't want to install/use opam, but I prefer using opam to handle OCaml dependencies, since you can pick a precise version...)
Well, that's it, you now have Ocsigen installed!
Create the web page
Then, to create a basic scaffold site, just run:
eliom-distillery -name mysite -template basic -target-directory mysite
and to run it:
cd mysite/
make test.byte
You should see a basic page at localhost:8080/.
Now, let's insert your code. Let's imagine it is named myscript and returns a string:
let myscript () = "Here is my amazing result"
Add this code before the let () = in the file mysite.eliom, and just after h2 [pcdata "Welcome from Eliom's distillery!"]; add the line:
p [pcdata (Printf.sprintf "My script gives the return function : \"%s\"" (myscript ()))]
This will create a paragraph (p) whose content (pcdata) contains the result of myscript.
For me, the whole mysite.eliom looks like this:
{shared{
  open Eliom_lib
  open Eliom_content
  open Html5.D
}}

module Mysite_app =
  Eliom_registration.App (
    struct
      let application_name = "mysite"
    end)

let main_service =
  Eliom_service.App.service ~path:[] ~get_params:Eliom_parameter.unit ()

let myscript () = "Here is my amazing result"

let () =
  Mysite_app.register
    ~service:main_service
    (fun () () ->
      Lwt.return
        (Eliom_tools.F.html
           ~title:"mysite"
           ~css:[["css";"mysite.css"]]
           Html5.F.(body [
             h2 [pcdata "Welcome from Eliom's distillery!"];
             p [pcdata (Printf.sprintf "My script gives the return function : \"%s\"" (myscript ()))]
           ])))
(Please note that let application_name = "mysite" must match the name you gave to eliom-distillery. If it doesn't, your JavaScript won't be linked.)
Let's compile again:
make test.byte
Now at localhost:8080 you can read:
My script gives the return function : "Here is my amazing result"
The result of the script has been included!
Going further
You can also define myscript to run on the client side, take some POST/GET params, or communicate with the page in real time in only a few lines, but if you want to learn more about that, just read the Ocsigen tutorial!
Interface with Nginx
I'm not sure you really need to interface it with Nginx, since ocsigenserver is meant to run as an (HTTP) server itself, but if needed you can always put ocsigenserver behind an Nginx server using a reverse proxy (or the other way around: you can serve Nginx from ocsigenserver; read the ocsigenserver manual for more details).