I am using the following commands for the silent installation of MariaDB on Ubuntu 14.04:
export DEBIAN_FRONTEND=noninteractive
sudo debconf-set-selections <<< 'mariadb-server-10.0 mysql-server/root_password password PASS'
sudo debconf-set-selections <<< 'mariadb-server-10.0 mysql-server/root_password_again password PASS'
sudo apt-get install -y mariadb-server
But it always prompts me with the following dialog:
────────────────────┤ Configuring mariadb-server-5.5 ├────────────────────┐
│ │
│ MariaDB is a drop-in replacement for MySQL. It will use your current │
│ configuration file (my.cnf) and current databases. │
│ │
│ Note that MariaDB has some enhanced features, which do not exist in │
│ MySQL and thus migration back to MySQL might not always work, at least │
│ not as automatically as migrating from MySQL to MariaDB. │
│ │
│ Really migrate to MariaDB? │
│ │
│
[Yes] [No] │
Could anyone help me in this regard?
I had a similar problem while writing a provisioning script for Vagrant to install and configure MariaDB. To solve it, I set the DEBIAN_FRONTEND variable to noninteractive and used sudo's -E option [1] to preserve the existing environment variables:
export DEBIAN_FRONTEND=noninteractive
sudo -E apt-get install -y mariadb-server
[1]
-E, --preserve-env
    Indicates to the security policy that the user wishes to preserve their
    existing environment variables. The security policy may return an error
    if the user does not have permission to preserve the environment.
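Putting both pieces together, a provisioning snippet along these lines should give a fully unattended install (a sketch; note that the package name in the debconf selections, here mariadb-server-10.0, has to match the MariaDB version your configured repository actually ships, otherwise the preseeded answers are ignored):
export DEBIAN_FRONTEND=noninteractive
# Preseed the root password prompts before apt runs the package's config script.
sudo debconf-set-selections <<< 'mariadb-server-10.0 mysql-server/root_password password PASS'
sudo debconf-set-selections <<< 'mariadb-server-10.0 mysql-server/root_password_again password PASS'
# -E makes sudo preserve DEBIAN_FRONTEND so apt-get really runs non-interactively.
sudo -E apt-get install -y mariadb-server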
I run RStudio Server in a Red Hat Linux virtual machine. I am trying to install the rgdal library in R with no success, and after reading many posts here on SO, the proper fix for this issue is to install the following software on Linux:
yum install gdal-devel
yum install proj-devel
yum install proj-nad
yum install proj-epsg
but when I tried to do this I got the following error:
Loaded plugins: langpacks, rhnplugin
This system is receiving updates from Spacewalk server.
https://packages.microsoft.com/rhel/7/prod/repodata/repomd.xml: [Errno 14] curl#35 - "Encountered end of file"
Trying other mirror.
addons7 | 1.1 kB 00:00:00
base_7.4 | 1.3 kB 00:00:00
cliente7 | 871 B 00:00:00
epel_7 | 1.3 kB 00:00:00
latest7 | 1.3 kB 00:00:00
mysql7_57 | 871 B 00:00:00
remi7_55 | 871 B 00:00:00
remi7_56 | 871 B 00:00:00
remi7_safe | 871 B 00:00:00
softwarecollections7 | 1.1 kB 00:00:00
One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
failed to retrieve repodata/primary.xml.gz from softwarecollections7
error was [Errno 14] curl#18 - "transfer closed with 954471 bytes remaining to read"
How can I resolve this? For instance, there is a suggestion of disabling the repo, but I don't know what I have to put in the "..." portion.
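In case it helps with the "..." part: it is simply the rest of the yum command line you were going to run. Assuming the failing repository is softwarecollections7, as the last error lines suggest, that would look roughly like:
sudo yum --disablerepo=softwarecollections7 install gdal-devel proj-devel proj-nad proj-epsg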
I have these JSON files in a large directory structure. Some are just "abc.json" and some have the added ".finished" suffix. I want to rsync only the files without ".finished".
$ find
.
./a
./a/abc.json.finished
./a/abc.json <-- this file
./a/index.html
./a/somefile.css
./b
./b/abc.json.finished
./b/abc.json <-- this file
Here is a sample rsync command that copies all of the "abc.json" AND the "abc.json.finished" files. I just want the "abc.json" files.
$ rsync --exclude="finished" --include="*c.json" --recursive \
--verbose --dry-run . server:/tmp/rsync
sending incremental file list
created directory /tmp/rsync
./
a/
a/abc.json
a/abc.json.finished
a/index.html
a/somefile.css
b/
b/abc.json
b/abc.json.finished
sent 212 bytes received 72 bytes 113.60 bytes/sec
total size is 0 speedup is 0.00 (DRY RUN)
Update: Added more files to the folders. HTML files, CSS and other files are present in my scenario. Only files ending in "c.json" should be transferred.
Scenario can be recreated with the following commands:
mkdir a
touch a/abc.json.finished
touch a/abc.json
touch a/index.html
touch a/somefile.css
mkdir b
touch b/abc.json.finished
touch b/abc.json
Try the following command. It assumes that you also want to replicate the source directory tree (for any directories containing files whose names end with c.json) in the destination location:
$ rsync --include="*c.json" --exclude="*.*" --recursive \
--verbose --dry-run . server:/tmp/rsync
Explanation of command:
--include="*c.json" includes only assets whose name ends with c.json
--exclude="*.*" excludes all other assets (i.e. assets whose name includes a dot .)
--recursive recurse into directories.
--verbose log the results to the console.
--dry-run shows what would have been copied, without actually copying the files. This option/flag should be omitted to actually perform the copy task.
. the path to the source directory.
server:/tmp/rsync the path to the destination directory.
EDIT: Unfortunately, the command provided above also copies files whose filename does not include a dot character. To avoid this, consider utilizing both rsync and find as follows:
$ rsync --dry-run --verbose --files-from=<(find ./ -name "*c.json") \
./ server:/tmp/rsync
This utilizes process substitution, i.e. <(list), to pass the output from the find command to the --files-from= option/flag of the rsync command.
source tree
.
├── a
│ ├── abc.json
│ ├── abc.json.finished.json
│ ├── index.html
│ └── somefile.css
└── b
├── abc.json
└── abc.json.finished.json
resultant destination tree
server
└── tmp
└── rsync
├── a
│ └── abc.json
└── b
└── abc.json
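For completeness, a filter-rule variant that stays entirely within rsync should also work here (a sketch: the first include lets rsync descend into every directory, the second keeps the wanted files, the exclude drops everything else, and --prune-empty-dirs skips directories that would otherwise be created empty):
$ rsync --recursive --prune-empty-dirs --include="*/" --include="*c.json" --exclude="*" \
    --verbose --dry-run . server:/tmp/rsync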
A hacky solution is to use grep to create a file containing all the file names we want to transfer.
find | grep "c.json$" > rsync-files
rsync --files-from=rsync-files --verbose --recursive --compress --dry-run \
./ \
server:/tmp/rsync
rm rsync-files
Content of 'rsync-files':
./a/abc.json
./b/abc.json
Output when running rsync command:
sending incremental file list
created directory /tmp/rsync
./
a/
a/abc.json
b/
b/abc.json
I followed this document http://docs.sulu.io/en/latest/book/getting-started.html and at the end of the installation process I got this error:
Target: cache
cache:clear ({"--no-optional-warmers":true,"--no-debug":true,"--no-interaction":true})
// Clearing the admin cache for the dev environment with debug true
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/home/vagrant/Code/sulu/var/cache/admin/de~/doctrine": .
sulu:build [-D|--nodeps] [--destroy] [-h|--help] [-q|--quiet] [-v|vv|vvv|--verbose] [-V|--version] [--ansi] [--no-ansi] [-n|--no-interaction] [-s|--shell] [--process-isolation] [-e|--env ENV] [--no-debug] [--] <command> [<target>]
Before that I was trying to set up file permissions with this:
HTTPDUSER=`ps axo user,comm | grep -E '[a]pache|[h]ttpd|[_]www|[w]ww-data|[n]ginx' | grep -v root | head -1 | cut -d\ -f1`
sudo setfacl -R -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs var/uploads var/uploads/* web/uploads web/uploads/* var/indexes var/sessions
sudo setfacl -dR -m u:"$HTTPDUSER":rwX -m u:`whoami`:rwX var/cache var/logs var/uploads var/uploads/* web/uploads web/uploads/* var/indexes var/sessions
and also had these errors:
setfacl: web/uploads: Operation not supported
setfacl: web/uploads/media: Operation not supported
setfacl: web/uploads/media: Operation not supported
My host OS : Ubuntu 16.04
Vagrant : v.1.9.3
VirtualBox : v.5
Homestead: v.5.2.1
Has anyone successfully installed Sulu CMS with Homestead?
What are my options for solving these issues?
Sulu CMS looks very promising, but unfortunately I still could not install it locally after many attempts.
UPDATE
After Daniel's comment I tried another way to install Sulu, but again got an error at the very end of the installation:
Executing builders
==================
Target: cache
cache:clear ({"--no-optional-warmers":true,"--no-debug":true,"--no-interaction":true})
// Clearing the admin cache for the dev environment with debug true
[Symfony\Component\Filesystem\Exception\IOException]
Failed to remove directory "/home/vagrant/Code/sulu/app/cache/admin/de~/annotations": .
Opening test.app in the browser, I see this error:
Fatal error: Uncaught
Symfony\Component\Debug\Exception\FatalThrowableError: Call to a member function getLocale() on null in /home/vagrant/Code/sulu/vendor/sulu/sulu/src/Sulu/Bundle/WebsiteBundle/Twig/Content/ContentPathTwigExtension.php on line 70
Tried to delete cache folders manually - the same error.
All console commands work fine.
Any ideas?
P.S. I have good experience installing other Symfony-based applications under Homestead, and they basically all went smoothly (Sylius, eZ, etc.), so I am very surprised....
To get around this, you need to remove this folder from the host machine, because it's an artifact from the previous process of trying to set things up and it's in god-mode, created by a no-longer existing almighty entity.
vagrant destroy (needed to release filesystem locks)
Remove folder on host: rm -rf .../sulu/app/cache/admin
Set type: nfs on the folder binding in Homestead.yaml (see the snippet below)
vagrant up
It should work then; I encountered this same problem recently.
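For reference, the NFS folder binding in Homestead.yaml would look roughly like this (the paths here are examples; adjust them to where your Sulu code lives):
folders:
    - map: ~/Code
      to: /home/vagrant/Code
      type: "nfs"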
If you're using Homestead Improved, use the built-in "sulu" project type for a decent Nginx auto-setup:
sites:
    - map: homestead.app
      to: /home/vagrant/Code/Project/web
      type: sulu
Here is what I did on my mac:
brew install npm
sudo -H npm install -g meteorite
The outcome is:
$ sudo -H npm install -g meteorite
Password:
npm http GET https://registry.npmjs.org/meteorite
....
> meteorite#0.6.13 postinstall /usr/local/share/npm/lib/node_modules/meteorite
> sh ./completions/postinstall.sh
npm WARN package.json node#0.0.0 No repository field.
meteorite#0.6.13 /usr/local/share/npm/lib/node_modules/meteorite
├── colors#0.6.0-1
├── underscore#1.5.2
├── wrench#1.5.1
├── fstream#0.1.24 (inherits#2.0.1, graceful-fs#2.0.1, rimraf#2.2.2, mkdirp#0.3.5)
├── optimist#0.6.0 (wordwrap#0.0.2, minimist#0.0.5)
├── ddp#0.3.4 (meteor-ejson#0.6.3, ws#0.4.31)
└── prompt#0.2.11 (revalidator#0.1.5, pkginfo#0.3.0, read#1.0.5, utile#0.2.0, winston#0.6.2)
But when I type mrt, it shows
$ mrt
-bash: mrt: command not found
My node and npm versions are:
$ node -v
v0.10.12
$ npm -v
1.2.32
Any help appreciated.
I just found the reason and solution: the mrt script in ~/node_modules/meteorite/.bin/ should be included in /usr/local/bin or somewhere else in your $PATH. So do this:
cp ~/node_modules/meteorite/.bin/mrt /usr/local/bin/
When I type mrt, it works :)
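An alternative (a sketch, assuming the same install location under ~/node_modules and a bash shell) is to put that .bin directory on your PATH instead of copying the binary, so future meteorite updates are picked up automatically:
echo 'export PATH="$HOME/node_modules/meteorite/.bin:$PATH"' >> ~/.bash_profile
source ~/.bash_profile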
Can PowerShell on Windows, by itself or with a simple shell script, list files and directories this way? (Or with a Mac OS X or Ubuntu shell script?)
audio
mp3
song1.mp3
some other song.mp3
audio books
7 habits.mp3
video
samples
up.mov
cars.mov
Unix's ls -R or ls -lR can't seem to list it in a tree structure unfortunately.
You can use tree.com for an indented listing like the one shown above. Note that tree.com only works with the filesystem. If you ever have a need to display the structure of other providers like WSMan or RegEdit, you can use the Show-Tree function that comes with the PowerShell Community Extensions.
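For example, something along these lines should work for a registry path (assuming the PSCX module is installed and imported; exact parameter names may differ between PSCX versions):
Import-Module Pscx
Show-Tree HKLM:\SOFTWARE\Microsoft -Depth 2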
In Linux, you can use:
ls -R directory | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
or for the current directory:
ls -R | grep ":$" | sed -e 's/:$//' -e 's/[^-][^\/]*\//--/g' -e 's/^/ /' -e 's/-/|/'
You can put this "small" command in a script.
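For the example structure in the question, this pipeline prints roughly the following. Note that it lists directories only, because grep ":$" keeps just the directory header lines from the ls -R output:
 .
 |-audio
 |---audio books
 |---mp3
 |-video
 |---samples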
You can use Unix's tree command or, if you are on Windows, the GNU tree port for Windows.
Windows has a tree command:
C:\folder>tree . /F
Folder PATH listing for volume sys
Volume serial number is F275-CBCA
C:\FOLDER.
│ file01.txt
│
├───Sub folder
│ chart-0001.png
│ chart-0002.png
└───────chart-0004.png
The /F parameter is what tells it to show files. You can execute this from PowerShell as well, as shown below.
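For example, from a PowerShell prompt in the target folder (tree.com is found via the system path, so the invocation matches the cmd.exe example above):
PS C:\folder> tree.com . /F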
This is probably what you're looking for:
ls -R | tree
It's not installed by default on Ubuntu. So, to install it:
sudo apt-get install tree