Firebase (Cloud Functions) - Compile FFmpeg with video stabilization (vidstab)

I'm using Firebase with the Cloud Functions feature and FFmpeg.
I see that FFmpeg is now included by default in the pre-installed packages, as you can see here.
This way, I can use it with a spawn command like this:
await spawn('ffmpeg', [
  '-y',
  '-i',
  tempFilePath,
  '-vf',
  'transpose=2',
  targetTempFilePath,
]);
and it works perfectly.
Unfortunately, when I try to stabilize the video with vidstab, I get the following error:
ChildProcessError: `ffmpeg -i /tmp/1628240712871_edited.mp4 -vf vidstabdetect=result=transforms.trf -an -f null -` failed with code 1
I think it's because libvidstab is not enabled in the pre-installed FFmpeg build, as stated below:
To enable compilation of this filter, you need to configure FFmpeg with
--enable-libvidstab.
Do you have any idea how I can activate/use it?
Thank you in advance

For your information, I managed to do it.
I had to upload my own binary. For that, I just downloaded a binary pre-compiled for a Debian-based Linux environment (here, but you can get your own from anywhere else ;) ) and put it inside the functions directory.
After that, I just deployed the functions, as usual.
This would result in the functions being deployed along with the binary.
I can now call my own binary with a command like this:
import { spawn } from 'child-process-promise';
...
await spawn('./ffmpeg', [
  // my commands
]);
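For example, a two-pass vidstab run with the uploaded binary could look like this (a sketch only: the /tmp transforms path is an assumption, and the variable names mirror the question):
// Pass 1: analyse the video and write a transforms file
// (/tmp is the only writable path in Cloud Functions)
await spawn('./ffmpeg', [
  '-i', tempFilePath,
  '-vf', 'vidstabdetect=result=/tmp/transforms.trf',
  '-an', '-f', 'null', '-',
]);
// Pass 2: apply the transforms to produce the stabilized video
await spawn('./ffmpeg', [
  '-y',
  '-i', tempFilePath,
  '-vf', 'vidstabtransform=input=/tmp/transforms.trf',
  targetTempFilePath,
]);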
I hope it helps ;)

Related

Why does deno's exec package execute commands without outputting messages to the terminal?

import { exec } from "https://deno.land/x/exec/mod.ts";
await exec(`git clone https://github.com/vuejs/vue.git`)
When I run git clone https://github.com/vuejs/vue.git from a .sh file, it prints messages to the terminal,
but in Deno it doesn't.
First, I think it is important to echo what jsejcksn commented:
The exec module is not related to Deno. All modules at https://deno.land/x/... are third-party code. For working with your shell, see Creating a subprocess in the manual.
Deno's way of doing this without a 3rd party library is to use Deno.run.
With that said, if you take a look at exec's README you'll find documentation for what you're looking for under Capture the output of an external command:
Sometimes you need to capture the output of a command. For example, I do this to get git log checksums:
import { exec, OutputMode } from "https://deno.land/x/exec/mod.ts";
let response = await exec('git log -1 --format=%H', { output: OutputMode.Capture });
If you look at exec's code you'll find it uses Deno.run under the hood. If you like exec you can use it, but you might find you like using Deno.run directly instead.
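For reference, here is a minimal sketch using Deno.run directly (current for the Deno versions this answer targets; newer Deno releases provide Deno.Command instead). Run it with --allow-run; stdout/stderr default to "inherit", so the command's messages appear in the terminal just as they would in a shell:
const p = Deno.run({
  cmd: ["git", "clone", "https://github.com/vuejs/vue.git"],
});
const status = await p.status(); // wait for the process to exit
p.close(); // release the process resource
console.log("exit code:", status.code);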

Get Base URL of Current Jupyter Server IPython is Connected To

I'd like to be able to know the base URL of the Jupyter Notebook server IPython is presently connected to. I'm aware of the notebook.notebookapp.list_running_servers() function which produces output like:
[
  {
    'base_url': '/',
    'hostname': 'localhost',
    'notebook_dir': '/home/username/dir-notebook-was-spawned-in',
    'password': False,
    'pid': 368094,
    'port': 8888,
    'secure': False,
    'sock': '',
    'token': '4e7e860527d5333306cb06c594aa2167a7d375294f96c2d9',
    'url': 'http://localhost:8888/',
  },
  ...
]
This feels tantalizingly close to what I want since there's a base_url key there; however, I don't know how to determine which server in the list IPython is actually connected to. The closest approximation I've been able to come up with is to see which server's notebook_dir key most closely matches os.getcwd(), but this is obviously imperfect.
Further Findings:
I've now realized that notebook.notebookapp.list_running_servers() is not the right way to go about this, because the notebook server and the kernel are not guaranteed to be running in the same place, in which case the function would always return an empty list.
There was some discussion about this in ipyleaflet. You cannot get the base URL from the back-end alone; the way we do it in ipyleaflet is to get it in the front-end using window.location.href.
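For example, a minimal front-end sketch for the classic Notebook UI (run in the browser, e.g. via IPython.display.Javascript; frontend_url is a hypothetical variable name pushed back into the kernel):
// Executes in the browser, where the page URL is known,
// and assigns it to a Python variable in the kernel:
IPython.notebook.kernel.execute(
  "frontend_url = " + JSON.stringify(window.location.href)
);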
On the command line you can use the following:
jupyter notebook list --json | python3 -c 'import json; import sys; print(json.load(sys.stdin)["base_url"])'
(Remove the ["base_url"] part from that command to see the full dictionary).
In Python, the base URL is listed in the output of:
import psutil
# The kernel's parent process is the server that launched it; its
# command line typically includes the base URL among its arguments.
psutil.Process().parent().cmdline()
These are derived from discussions on the Jupyter Discourse Forum here and here.
For Binder sessions, it was pointed out here to use the following to get a good listing of details:
env | grep -i jupyter

How to call a Meteor method from the command line

I have a Meteor app and want to call a server method from the command line, so that I can write a bash script to perform scheduled operations.
Is there any way to either call a method directly, or submit a form which will then trigger server-side code?
I've tried using curl to call a method, but either it's not possible or I'm missing something basic. This doesn't work:
curl "http://localhost:3000/Meteor.call('myMethod')"
nor does:
curl -s -d "http://localhost:3000/imports/api/test.js" > out.html
where test.js:
var test = function(){
  console.log('hello');
}
I thought of using a form, but I can't see how to create a submit event, because the Meteor client uses template events that then call server methods.
I'll be very grateful for any help! This feels like it should be a simple thing but has me stumped.
Edit: I've also tried phantomjs and slimerjs as run through casperjs.
phantomjs is no longer maintained and generates an error:
TypeError: Attempting to change the setter of an unconfigurable property.
https://github.com/casperjs/casperjs/issues/1935
slimerjs errors with Firefox 60 and I can't figure out how to 'downgrade' back to the supported 59, and the option to disable automatic updates of Firefox no longer seems to exist. The error is:
c is undefined
https://github.com/laurentj/slimerjs/issues/694
You could make use of the Node ddp package to call the Meteor method from a separate JS file that you create as a standalone script. From there you can pipe all output to wherever you want.
Let's assume the following Meteor method:
Meteor.methods({
  'myMethod'() {
    console.log("hello console")
    return "hello result"
  }
})
The upcoming steps will let you call this method from another shell, assuming your Meteor application is running.
1. Install ddp in your global npm directory
$ meteor npm install -g ddp
2. Create the script to call your method in your test directory
$ mkdir -p ddptest
$ cd ddptest
$ touch ddptest.js
Place the ddp script code into the file with the editor or command of your choice.
(The following code is freely taken from the package's readme. Feel free to configure it to your needs.)
ddptest/ddptest.js
var DDPClient = require(process.env.DDP_PATH);

var ddpclient = new DDPClient({
  // All properties optional, defaults shown
  host: "localhost",
  port: 3000,
  ssl: false,
  autoReconnect: true,
  autoReconnectTimer: 500,
  maintainCollections: true,
  ddpVersion: '1', // ['1', 'pre2', 'pre1'] available
  // uses the SockJS protocol to create the connection;
  // this still uses websockets, but allows to get the benefits
  // from projects like meteorhacks:cluster
  // (for load balancing and service discovery)
  // do not use the `path` option when you are using useSockJs
  useSockJs: true
  // Use a full url instead of a set of `host`, `port` and `ssl`;
  // do not set `useSockJs` if `url` is used (left commented out here,
  // since we connect to localhost:3000 above):
  // url: 'wss://example.com/websocket'
});
ddpclient.connect(function (error, wasReconnect) {
  // If autoReconnect is true, this callback will be invoked each time
  // a server connection is re-established
  if (error) {
    console.log('DDP connection error!');
    console.error(error);
    return;
  }
  if (wasReconnect) {
    console.log('Reestablishment of a connection.');
  }
  console.log('connected!');
  setTimeout(function () {
    /*
     * Call a Meteor Method
     */
    ddpclient.call(
      'myMethod',               // name of the Meteor method being called
      ['foo', 'bar'],           // parameters to send to the Meteor method
      function (err, result) {  // callback which returns the method call results
        console.log('called function, result: ' + result);
        ddpclient.close();
      },
      function () {             // callback which fires when the server has finished
        console.log('updated'); // sending any updated documents as a result of
        console.log(ddpclient.collections.posts); // calling this method
      }
    );
  }, 3000);
});
The code assumes that your app runs on localhost:3000; note that there is no connection close on errors or undesired behavior.
As you can see at the top, the file imports your globally installed ddp package. Now, in order to get its path without using additional tools, just pass an environment variable (process.env.DDP_PATH) and let your shell handle the path resolving.
In order to get the installation path you can use npm root with the global flag.
Finally call your script via:
$ DDP_PATH=$(meteor npm root -g)/ddp meteor node ddptest.js
Which will give you the following output:
connected!
updated
undefined
called function, result: hello result
And logs hello console to the open session that is running your meteor app.
Edit: A note on using this in production
If you want to use this script in production you have to use the shell commands without the meteor command but using your installation of node and npm.
If you get in trouble with paths use process.execPath to find your node binary and npm root -g to find your global npm modules.
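For example (a sketch, assuming node and npm are available on your PATH in production):
$ DDP_PATH=$(npm root -g)/ddp node ddptest.js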
You can check out this documentation: Command Line | meteor shell.
While your meteor app is running, you can execute meteor shell to start an interactive console. In the console, you can do Meteor.call(...).
So if you want to write a script using meteor shell, you can pipe the script file into meteor shell, like:
$ meteor shell < script_file
See also the answer to "How can I pipe a command into the meteor shell?"
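For example, a script_file along these lines could work (a sketch only; Meteor.call is asynchronous in the shell too, so a callback is used to print the result):
Meteor.call('myMethod', 'foo', 'bar', function (err, result) {
  console.log(err ? err : result);
});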

Can Ansible unarchive be made to write static folder modification times?

I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it integrates cleanly with server build scripts, letting me bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script, with a connection type of local; I assume that means SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to present to unarchive to say that I want the folder modification times from the zip file, not from playbook runtime. Is it possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
  #echo $file
  cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_before.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
    ]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
    "checksum_after.stdout_lines": [
        "374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
        "719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
    ]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all the files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task. Thus, on subsequent runs the task will be skipped if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
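For example (a sketch; the creates path assumes the archive unpacks into a top-level wordpress-https folder, as the question's checksum script suggests):
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https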
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising number of things to think about in a real WordPress installation), but in general I found the process much nicer than using orchestration tools.

Libraries and samples to deploy an OCaml application to an Nginx server

I have developed a desktop application in OCaml under Ubuntu.
Now, I would like to deploy it to a DigitalOcean Ubuntu server (512 MB Memory / 20 GB Disk) that I own. I will use JavaScript programs on the client side to call the executable stored on the server side, then deal with the returned results.
However, I have no idea how to get started.
Someone pointed me to FastCGI, and I did see some FastCGI settings in the Nginx server. It seems that there are some OCaml libraries to handle FastCGI or CGI: ocamlnet, cgi, CamlGI, etc.
Could anyone tell me which library is stable and suits my need?
Besides, are there some samples of the library and the settings in Nginx server to let me get started?
I don't think the solution I'm proposing is the lightest, but it has several advantages:
It allows you to generate the website in OCaml, so that the interface with your code won't be too hard to write.
If needed, you will be able to export your whole application directly to JavaScript: your server won't do useless computations that the user's machine could do, and moreover you don't need to rewrite your code in JavaScript, since Ocsigen can convert it for you.
If some operations need to be performed by the server, you can really easily call server-side functions from the client-side code, and all your code will be written in OCaml.
It's pretty easy.
What is this amazing tool? Ocsigen! You can find a complete tutorial here.
Now let's see how you can use it.
Install Ocsigen
First, if you don't have it, install opam (it will allow you to install OCaml packages). Just follow the instructions on the website (I cannot paste the link since I don't have enough reputation points), but basically for Ubuntu run:
sudo add-apt-repository ppa:avsm/ppa
sudo apt-get update
sudo apt-get install ocaml ocaml-native-compilers camlp4-extra m4 opam
Then, you need to install Ocsigen. All instructions are here: https://ocsigen.org/install but basically just do:
sudo aptitude install libev-dev libgdbm-dev libncurses5-dev libpcre3-dev libssl-dev libsqlite3-dev libcairo-ocaml-dev m4 opam camlp4-extra
opam install eliom
(Note: you can also install it with apt-get if you don't want to install/use opam, but I prefer using opam to deal with OCaml dependencies; you can choose a precise version...)
That's it, you have now installed Ocsigen!
Create the web page
Then, to create a basic scaffold site, just run:
eliom-distillery -name mysite -template basic -target-directory mysite
and to run it:
cd mysite/
make test.byte
You should see a basic page at localhost:8080/.
Now, let's insert your code. Let's imagine it is named myscript and returns a string:
let myscript () = "Here is my amazing result"
Add this code before the let () = in the file mysite.eliom, and just after h2 [pcdata "Welcome from Eliom's distillery!"]; add the line:
p [pcdata (Printf.sprintf "My script gives the return function : \"%s\"" (myscript ()))]
This will create a paragraph (p) whose content (pcdata) contains the result of myscript.
For me, the whole mysite.eliom is:
{shared{
  open Eliom_lib
  open Eliom_content
  open Html5.D
}}

module Mysite_app =
  Eliom_registration.App (
    struct
      let application_name = "mysite"
    end)

let main_service =
  Eliom_service.App.service ~path:[] ~get_params:Eliom_parameter.unit ()

let myscript () = "Here is my amazing result"

let () =
  Mysite_app.register
    ~service:main_service
    (fun () () ->
      Lwt.return
        (Eliom_tools.F.html
           ~title:"mysite"
           ~css:[["css";"mysite.css"]]
           Html5.F.(body [
             h2 [pcdata "Welcome from Eliom's distillery!"];
             p [pcdata (Printf.sprintf "My script gives the return function : \"%s\"" (myscript ()))]
           ])))
(Please note that let application_name = "mysite" must match the name that you gave to eliom-distillery. If it doesn't, your JavaScript won't be linked.)
Let's compile again:
make test.byte
Now at localhost:8080 you can read :
My script gives the return function : "Here is my amazing result"
The result of the script has been included!
Going further
You can also define myscript to run on the client side, take some POST/GET parameters, or communicate with the page in real time in only a few lines; if you want to learn more about that, just read the Ocsigen tutorial!
Interface with Nginx
I'm not sure you really need to interface it with Nginx, since ocsigenserver is designed to run as an (HTTP) server itself, but if needed you can always put ocsigenserver behind an Nginx server using a reverse proxy (or the other way around: you can serve Nginx from ocsigenserver; read the ocsigenserver manual for more details).
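If you do want Nginx in front, a minimal reverse-proxy sketch could look like this (assuming ocsigenserver listens on 127.0.0.1:8080; the server_name is a placeholder):
server {
    listen 80;
    server_name example.com;

    location / {
        # forward all requests to ocsigenserver
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}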
