I have developed a desktop application in OCaml under Ubuntu.
Now, I would like to deploy it to a DigitalOcean Ubuntu server (512 MB Memory / 20 GB Disk) that I own. I will use JavaScript programs on the client side to call the executable stored on the server side, then deal with the returned results.
However, I have no idea how to get started.
Someone pointed me to FastCGI, and I did see some FastCGI settings in the Nginx server. It seems there are some OCaml libraries for handling FastCGI or CGI: ocamlnet, cgi, CamlGI, etc.
Could anyone tell me which library is stable and suits my needs?
Also, are there any samples using such a library, together with the corresponding Nginx settings, to help me get started?
I don't think the solution I'm going to propose is the lightest one, but it has several advantages:
It lets you generate the website in OCaml, so interfacing with your existing code won't be too hard.
If needed, you can export your whole application directly to JavaScript: your server won't waste time on computations the user's browser could do, and you don't need to rewrite your code in JavaScript, since Ocsigen can convert it for you.
If some operations do need to be performed by the server, you can very easily call server-side functions from the client-side code, and all your code stays in OCaml.
It's pretty easy.
What is this amazing tool? Ocsigen! You can find a complete tutorial on the Ocsigen website.
Now let's see how you can use it.
Install Ocsigen
First, if you don't have it, install opam (it lets you install OCaml packages). Just follow the instructions on the opam website (I cannot paste the link since I don't have enough reputation points), but basically, on Ubuntu, run:
sudo add-apt-repository ppa:avsm/ppa
sudo apt-get update
sudo apt-get install ocaml ocaml-native-compilers camlp4-extra m4 opam
Then you need to install Ocsigen. All the instructions are at https://ocsigen.org/install, but basically just do:
sudo aptitude install libev-dev libgdbm-dev libncurses5-dev libpcre3-dev libssl-dev libsqlite3-dev libcairo-ocaml-dev m4 opam camlp4-extra
opam install eliom
(Note: you can also install it with apt-get if you don't want to install/use opam, but I prefer using opam to handle OCaml dependencies; it lets you pick a precise version, etc.)
Well, that's it: you have now installed Ocsigen!
Create the web page
Then, to create a basic scaffold site, just run:
eliom-distillery -name mysite -template basic -target-directory mysite
and to run it:
cd mysite/
make test.byte
You should see a basic page at localhost:8080/.
Now let's insert your code. Let's imagine it is named myscript and returns a string:
let myscript () = "Here is my amazing result"
Add this code before the let () = in the file mysite.eliom, and just after h2 [pcdata "Welcome from Eliom's distillery!"]; add the line:
p [pcdata (Printf.sprintf "My script returned: \"%s\"" (myscript ()))]
This will create a paragraph (p) whose content (pcdata) contains the result of myscript.
For me, the whole mysite.eliom now looks like this:
{shared{
  open Eliom_lib
  open Eliom_content
  open Html5.D
}}

module Mysite_app =
  Eliom_registration.App (
    struct
      let application_name = "mysite"
    end)

let main_service =
  Eliom_service.App.service ~path:[] ~get_params:Eliom_parameter.unit ()

let myscript () = "Here is my amazing result"

let () =
  Mysite_app.register
    ~service:main_service
    (fun () () ->
      Lwt.return
        (Eliom_tools.F.html
           ~title:"mysite"
           ~css:[["css";"mysite.css"]]
           Html5.F.(body [
             h2 [pcdata "Welcome from Eliom's distillery!"];
             p [pcdata (Printf.sprintf "My script returned: \"%s\"" (myscript ()))]
           ])))
(Please note that let application_name = "mysite" must match the name you gave to eliom-distillery. If it doesn't, your JavaScript won't be linked.)
Let's compile again:
make test.byte
Now at localhost:8080 you can read:
My script returned: "Here is my amazing result"
The result of the script has been included!
Going further
You can also define myscript to run on the client side, make it take some POST/GET parameters, or have it communicate with the page in real time, all in only a few lines; to learn more about that, just read the Ocsigen tutorial!
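As a rough, untested sketch of the GET-parameter case (the exact API depends on your Eliom version, and the service name, path, and parameter name here are made up for illustration), you could register a second service next to main_service in mysite.eliom:

(* Hypothetical example: a service at /echo taking a GET parameter "input"
   and displaying it back, in the same style as main_service above. *)
let echo_service =
  Eliom_service.App.service
    ~path:["echo"]
    ~get_params:Eliom_parameter.(string "input") ()

let () =
  Mysite_app.register
    ~service:echo_service
    (fun input () ->
      Lwt.return
        (Eliom_tools.F.html
           ~title:"mysite"
           ~css:[["css";"mysite.css"]]
           Html5.F.(body [
             p [pcdata (Printf.sprintf "You sent: %s" input)]
           ])))

With that registered and the site rebuilt, visiting localhost:8080/echo?input=hello should display the parameter back.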
Interface with Nginx
I'm not sure you really need to interface it with Nginx, since ocsigenserver is designed to run as an HTTP server itself, but if needed you can always put ocsigenserver behind an Nginx server acting as a reverse proxy (or the other way around: you can serve Nginx from ocsigenserver; read the ocsigenserver manual for more details).
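If you do go the reverse-proxy route, a minimal Nginx server block could look roughly like this (assumptions: ocsigenserver listening on 127.0.0.1:8080 and Nginx serving the site on port 80; adapt the domain and ports to your setup):

server {
    listen 80;
    server_name example.com;   # assumption: replace with your own domain

    location / {
        proxy_pass http://127.0.0.1:8080;    # ocsigenserver's default port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}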
import { exec } from "https://deno.land/x/exec/mod.ts";
await exec(`git clone https://github.com/vuejs/vue.git`)
When I use git clone https://github.com/vuejs/vue.git in a .sh file, it shows messages in the terminal, but when I run it through Deno it doesn't.
First, I think it is important to echo what jsejcksn commented:
The exec module is not related to Deno. All modules at https://deno.land/x/... are third-party code. For working with your shell, see Creating a subprocess in the manual.
Deno's way of doing this without a 3rd party library is to use Deno.run.
With that said, if you take a look at exec's README you'll find documentation for what you're looking for under Capture the output of an external command:
Sometimes you need to capture the output of a command. For example, I do this to get git log checksums:
import { exec, OutputMode } from "https://deno.land/x/exec/mod.ts";
let response = await exec('git log -1 --format=%H', {output: OutputMode.Capture});
If you look at exec's code you'll find it uses Deno.run under the hood. If you like exec you can use it, but you might find you prefer using Deno.run directly instead.
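For reference, a minimal sketch of the same clone using the built-in Deno.run (the API this answer refers to; it needs the --allow-run permission, and the repository URL is the one from your question):

const p = Deno.run({
  cmd: ["git", "clone", "https://github.com/vuejs/vue.git"],
  stdout: "piped",
  stderr: "piped",
});

const [status, rawOut, rawErr] = await Promise.all([
  p.status(),
  p.output(),
  p.stderrOutput(),
]);
p.close();

console.log(new TextDecoder().decode(rawOut));   // captured stdout
console.error(new TextDecoder().decode(rawErr)); // git prints its progress on stderr
console.log(`exit code: ${status.code}`);

Note that git clone writes its progress messages to stderr, so you only see them if stderr is captured or inherited.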
I'm using Firebase with the Cloud Functions feature and FFmpeg.
I see that FFmpeg is now included by default in the pre-installed packages, as you can see here.
This way, I can use it with a spawn command like this:
await spawn('ffmpeg', [
  '-y',
  '-i',
  tempFilePath,
  '-vf',
  'transpose=2',
  targetTempFilePath,
]);
and it works perfectly.
Unfortunately, when I try to stabilize the video with vidstab, I get the following error:
ChildProcessError: `ffmpeg -i /tmp/1628240712871_edited.mp4 -vf vidstabdetect=result=transforms.trf -an -f null -` failed with code 1
I think it's because libvidstab is not enabled in that FFmpeg build, as the filter documentation states:
To enable compilation of this filter, you need to configure FFmpeg with
--enable-libvidstab.
Do you have any idea how I can activate/use it?
Thank you in advance
For your information, I managed to do it.
I had to upload my own binary. For that, I just downloaded a binary pre-compiled for a Debian-based Linux environment (here, but you can get yours anywhere else ;) ) and put it inside the functions directory.
After that, I just deployed the functions, as usual.
This would result in the functions being deployed along with the binary.
I can now call my own binary with a command like this:
import { spawn } from 'child-process-promise';
...
await spawn('./ffmpeg', [
  // my commands
]);
I hope it helps ;)
I've finally managed to make my proxy settings work for GitHub cloning, using the following code:
options(rsconnect.http = "internal")
Sys.setenv(http_proxy = "http://proxy.lala.blabla:8080")
Sys.setenv(https_proxy = "https://proxy.lala.blabla:8080")
I can now clone github projects using File > New Project > Version control.
But I can't install from github :'(
require(devtools)
install_github("this/that")
--> Installation failed: Could not resolve host: raw.githubusercontent.com
People seem to use the following command:
httr::set_config(use_proxy(...))
But that would force me to explicitly write my login/password, which I don't want to do. I'd rather use the default ones that are associated with
options(rsconnect.http = "internal")
How can I configure the proxy here without writing my login/password, please?
devtools uses httr under the hood; see e.g. devtools:::remote_package_name.github_remote or devtools:::remote_download.github_remote.
That is why it requires you to set the proxy via httr::set_config(httr::use_proxy(...)).
I would suggest that you just pick up the information from the environment variables and pass the relevant pieces to httr::set_config(httr::use_proxy(...)). Then you don't need to hard-code your settings.
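A minimal sketch of that idea (assuming the proxy needs no extra credentials beyond what the environment variable already encodes; parse_url and use_proxy come from httr):

library(httr)

proxy <- parse_url(Sys.getenv("https_proxy"))   # e.g. "https://proxy.lala.blabla:8080"
set_config(use_proxy(url = proxy$hostname, port = as.integer(proxy$port)))

devtools::install_github("this/that")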
I am writing a build process for a WordPress installation using Ansible. It doesn't have an application-level build system at the moment, and I've chosen Ansible so that it can cleanly integrate with server build scripts, letting me bring up a working server at the touch of a button.
Most of my WordPress plugins are being installed with the unarchive feature, pointing to versioned plugin builds on the official wordpress.org installation server. I've encountered a problem with just one of these, which is that it is always being marked as "changed" even though the files are exactly the same.
Having examined the state of ls -Rl before and after, I noticed that this plugin (WordPress HTTPS) is the only one to use internal sub-directories, and upon each decompression, the modification time of folders is getting bumped.
It may be useful to know that this is a project build script with a connection type of local, so I guess SSH is not being used.
Here is a snippet of my playbook:
- name: Install the W3 Total Cache plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/w3-total-cache.0.9.4.1.zip
    dest=wp-content/plugins
    copy=no

- name: Install the WP DB Manager plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wp-dbmanager.2.78.1.zip
    dest=wp-content/plugins
    copy=no

# #todo Since this has internal sub-folders, need to work out
# how to preserve timestamps of the original folders rather than
# re-writing them, which forces Ansible to record a change of
# server state.
- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
One hacky way of fixing this is to use ls -R before and after, using options to include file sizes but not timestamps, and then md5sum that output. I could then mark it as changed if there is a change in checksum. It'd work but it's not very elegant (and I'd want to do that for all plugins, for consistency).
Another approach is to abandon the task if a plugin file already exists, but that would cause problems when I bump the plugin version number to the latest copy.
Thus, ideally, I am looking for a switch to pass to unarchive saying that I want the folder modification times taken from the zip file, not from the playbook run time. Is that possible?
Update: a commenter asked if the file contents could have changed in any way. To determine whether they have, I wrote this script, which creates a checksum for (1) all file contents and (2) all file/directory timestamps:
#!/bin/bash
# Save pwd and then change dir to root location
STARTDIR=`pwd`
cd `dirname $0`/../..
# Clear collation file
echo > /tmp/wp-checksum
# List all files recursively
find wp-content/plugins/wordpress-https/ -type f | while read file
do
    #echo $file
    cat $file >> /tmp/wp-checksum
done
# Get checksum of file contents
sha1sum /tmp/wp-checksum
# Get checksum of file sizes
ls -Rl wp-content/plugins/wordpress-https/ | sha1sum
# Go back to original dir
cd $STARTDIR
I ran this as part of my playbook (running it in isolation using tags) and received this:
PLAY [Set this playbook to run locally] ****************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_before.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"10d66f7bdbbdd3af531d1b11a3db3059a5868838 -"
]
}
TASK [jonblog : Install the WordPress HTTPS plugin] ***************
changed: [localhost]
TASK [jonblog : Run checksum command] ******************************************
changed: [localhost]
TASK [jonblog : debug] *********************************************************
ok: [localhost] => {
"checksum_after.stdout_lines": [
"374fadc4df1578f78fd60b1be6758477c2c533fa /tmp/wp-checksum",
"719c9da94b525e723b1abe188ee9f5bbaf121f3f -"
]
}
PLAY RECAP *********************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0
The debug lines reflect the checksum hash of the contents of the files (this is identical) and then the checksum hash of ls -Rl of the file structure (this has changed). This is in keeping with my prior manual finding that directory checksums are changing.
So, what can I do next to track down why folder modification times are incorrectly flagging this operation as changed?
Rather than overwriting all the files each time and finding a way to keep the same modification datetime, you may want to use the creates option of the unarchive module.
As you may already know, this tells Ansible that a specific file/folder will be created as a result of the task, so the task will not be run again if that file/folder already exists.
See http://docs.ansible.com/ansible/unarchive_module.html#options
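Applied to the task from your question, that would look something like this (the creates path is an assumption based on the folder the zip extracts to):

- name: Install the WordPress HTTPS plugin
  unarchive: >
    src=https://downloads.wordpress.org/plugin/wordpress-https.3.3.6.zip
    dest=wp-content/plugins
    copy=no
    creates=wp-content/plugins/wordpress-https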
My solution is to modify the checksum script and to make that a permanent feature of the Ansible process. It feels a bit hacky to do my own checksumming, when Ansible should do it for me, but it works.
New answers that explain that I am doing something wrong, or that a new version of Ansible fixes the problem, would be most welcome.
If I get a moment, I will raise this as a possible bug with the Ansible team. However I do sometimes wonder about the effort/reward ratio when raising bugs on a busy tracker - I already have one item outstanding, it has been waiting a while, and I've chosen to work around that too.
Update (18 months later)
This Ansible build system never made it into live. It felt like I was always working around something. Recently, when I decided I needed to move my blog to another server, I finally Dockerised it. This took several weeks (since there is a surprising amount of things to think about in a real WordPress installation) but in general I found the process much nicer than using orchestration tools.
As part of our Chef infrastructure, I'm trying to set up and configure a berks-api server. I have created an Ubuntu server in Azure, bootstrapped it, and it appears as a node in my Chef server.
I have followed the instructions at github - berkshelf-api installation to install the berks-api via a cookbook. I have run
sudo chef-client
on my node and the cookbook appears to have been run successfully.
The problem is that the berks-api doesn't appear to run. My Linux terminology isn't great, so sorry if I'm making mistakes in what I say, but it appears that the berks-api service isn't able to run. If I navigate to /etc/service/berks-api and run this command
sudo berks-api
I get this error
I, [2015-07-23T11:56:37.490075 #16643] INFO -- : Cache manager starting...
I, [2015-07-23T11:56:37.491006 #16643] INFO -- : Cache Builder starting...
E, [2015-07-23T11:56:37.493137 #16643] ERROR -- : Actor crashed!
Errno::EACCES: Permission denied # rb_sysopen - /etc/chef/client.pem
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `read'
/opt/berkshelf-api/v2.1.1/vendor/bundle/ruby/2.1.0/gems/ridley-4.1.2/lib/ridley/client.rb:144:in `initialize'
If anyone could help me figure out what is going on, I'd really appreciate it. If you need to explain the setup any more let me know.
It turns out I had misunderstood the configuration of the berks-api. I needed to get a new private key for my client (berkshelf) from manage.chef.io for our organization. I then needed to upload the new key (berkshelf.pem) to /etc/berkshelf/api-server and reconfigure the berks-api to use the new key. So my config for the berks-api now looks like this:
{
  "home_path": "/etc/berkshelf/api-server",
  "endpoints": [
    {
      "type": "chef_server",
      "options": {
        "url": "https://api.opscode.com/organizations/my-organization",
        "client_key": "/etc/berkshelf/api-server/berkshelf.pem",
        "client_name": "berkshelf"
      }
    }
  ],
  "build_interval": 5.0
}
I couldn't upload berkshelf.pem directly to the target location; I had to upload it to my home directory, then copy it over from within Linux.
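Roughly, that copy step was just something like this (paths taken from the config above; adjust ownership/permissions to whatever your berks-api service expects):

sudo mkdir -p /etc/berkshelf/api-server
sudo cp ~/berkshelf.pem /etc/berkshelf/api-server/berkshelf.pem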
Having done this, the service starts and works perfectly.