I am trying to set up Laravel Homestead on Windows 10 and I am unable to navigate into my projects folder. This is my Homestead.yaml:

---
ip: "192.168.10.10"
memory: 2048
cpus: 2
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: C:/LaravelHome
      to: /home/vagrant/LaravelHome
sites:
    - map: homestead1.test
      to: /home/vagrant/LaravelHome/homestead1/public
databases:
    - homestead
features:
    - mysql: true
    - mariadb: false
    - postgresql: false
    - ohmyzsh: false
    - webdriver: false
Every time I try to cd into LaravelHome to set up a project, I get this feedback, even though I have the directory on my C drive:

vagrant@vagrant:~$ pwd
/home/vagrant
vagrant@vagrant:~$ cd LaravelHome
-bash: cd: LaravelHome: No such file or directory
vagrant@vagrant:~$

What could I be doing wrong?
It should be:

folders:
    - map: C:\LaravelHome
      to: /home/vagrant/code

The slash is in the wrong direction.
You may refer to my step-by-step guide:
https://medium.com/@dogcomp/ec996f9a2cb6
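Note that after editing Homestead.yaml the new folder mapping only takes effect once Vagrant re-reads it; a typical sequence from the Homestead directory (standard Vagrant commands, nothing specific to this setup):

# re-read Homestead.yaml, remount shared folders, and re-provision
vagrant reload --provision

# SSH back in; the mapped directory should now exist
vagrant ssh
ls /home/vagrant/code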
I'm trying to follow along this blog about using Docker with R.
I followed basic Docker set up steps and am able to run the hello world image.
I'm on an old 2009 Mac and had to use Docker Toolbox.
I'm in a place with weak internet connection and am using a personal hotspot.
Each time I try to run docker run --rm -p 8787:8787 rocker/verse I wait for a few minutes and see a downloading message, then I get a message "docker: unauthorized: authentication required."
I found this separate documentation which advised me to add a password:
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/rstudio
But I got the same result "docker: unauthorized: authentication required."
I did some Google searching and found some posts both here on SO and on GitHub, but was unable to identify what is causing this error in my specific case.
I suspect my weak internet connection might have something to do with it since I seem to be able to download for about 10 or 15 minutes before seeing this message.
Here is Docker info:
Macs-MacBook:~ macuser$ docker info
Containers: 1
 Running: 0
 Paused: 0
 Stopped: 1
Images: 2
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.116-boot2docker
Operating System: Boot2Docker 18.09.6 (TCL 8.2.1)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.951GiB
Name: default
ID: XMCE:OBLV:CKEX:EGIB:PHQ7:MLHF:ZJSA:PGYN:OIMM:JI67:ETCI:JKBH
Docker Root Dir: /mnt/sda1/var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
 provider=virtualbox
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
Does anyone know where I can look next in order to be able to pull and/or run the rocker image?
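For reference, one thing that can help on a weak connection is separating the download from the run: docker pull keeps the layers that finished downloading, so re-running it resumes progress instead of starting over, and stale registry credentials can be cleared first. These are standard Docker CLI commands, using the image name from above:

# clear any stale registry credentials that could cause a 401
docker logout

# pull the image on its own; re-run until every layer shows "Pull complete"
docker pull rocker/verse

# once the pull succeeds, the run itself needs no network access
docker run --rm -p 8787:8787 -e PASSWORD=blah rocker/verse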
I'm trying to understand what's wrong with my config such that I must specify saltenv=base when running sudo salt '*' state.highstate saltenv=base. If I run the highstate without specifying the saltenv, I get the error message:
No Top file or master_tops data matches found.
Running salt-call cp.get_file_str salt://top.sls on the minion or master pulls back the right top.sls file. Here's a snippet of my top.sls:
base:
  # All computers including clients and servers
  '*':
    - states.schedule_highstate
  # any windows machine, server or client
  'os:Windows':
    - match: grain
    - states.chocolatey
Also, I can run any state that's in the same directory or a subdirectory as the top.sls without specifying saltenv=, using sudo salt '*' state.apply states.(somestate).
I do have base specified in /etc/salt/master like this:
file_roots:
  base:
    - /srv/saltstack/salt/base
There is nothing in the filesystem on the Salt master; all of the salt and pillar files are coming from GitFS. Specifying saltenv= does grab from the correct corresponding git branch, with the master branch responding to saltenv=base or no saltenv specified when doing state.apply (that works).
gitfs_remotes:
  - https://git.asminternational.org/SaltStack/salt.git:
    - user: someuser
    - password: somepassword
    - ssl_verify: False
.
.
.
ext_pillar:
  - git:
    - master https://git.asminternational.org/SaltStack/pillar.git:
      - name: base
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: base
    - dev https://git.asminternational.org/SaltStack/pillar.git:
      - name: dev
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: dev
    - test https://git.asminternational.org/SaltStack/pillar.git:
      - name: test
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: test
    - prod https://git.asminternational.org/SaltStack/pillar.git:
      - name: prod
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: prod
    - experimental https://git.asminternational.org/SaltStack/pillar.git:
      - user: someuser
      - password: somepassword
      - ssl_verify: False
      - env: experimental
The behavior is inconsistent: the highstate can't find top.sls unless I specify the saltenv, but running individual states works fine without saltenv=.
Any ideas?
After more debugging I found the answer. One of the other environments' top.sls files was malformed and was causing an error. When specifying saltenv=base, none of the other top files are evaluated, which is why it worked. After I verified ALL of the top.sls files from all the environments, things behaved as expected.
Note to self, verify all the top files, not just the one you are working on.
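For anyone hitting the same thing: a quick way to surface a malformed top file across environments is to ask Salt to render the merged top data directly. Both commands below are standard Salt CLI; state.show_top evaluates every environment's top.sls, so a rendering error shows up immediately instead of as a vague "No Top file" message:

# Render the merged top data for every environment; a broken top.sls
# raises a rendering error here
salt '*' state.show_top

# List what the fileserver actually serves for a given environment
salt-run fileserver.file_list saltenv=dev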
I need to access custom grains in my config files using Jinja templating. Here are my files.
[root@localhost salt]# cat my_config.conf
{{ grains['ip'] }}

[root@localhost salt]# cat test_jinja.sls
/root/my_config.conf:
  file.managed:
    - source: salt://my_config.conf
    - user: root
    - group: root
    - mode: '0644'
    - makedirs: True
    - force: True
    - template: jinja
[root@localhost salt]# salt-ssh 'my-ip' state.sls test_jinja
10.225.253.134:
----------
          ID: /root/test
    Function: file.managed
      Result: False
     Comment: Unable to manage file: Jinja variable 'dict object' has no attribute 'ip'
     Started: 12:57:49.301697
    Duration: 33.039 ms
     Changes:
[root@localhost salt]# cat /etc/salt/roster
my-ip:                 # The id to reference the target system with
  host: xx.xx.xx.133   # The IP address or DNS name of the remote host
  user: root           # The user to log in as
  passwd: teledna      # The password to log in with
  grains:
    ip: 'xx.xx.xx.133'
How can I access the grains in the config files using salt-ssh?
This looks like a bug in Salt, where the grains from the roster aren't shipped over to the minion. Can you try this PR?
https://github.com/saltstack/salt/pull/40775
The reason is that there is no 'ip' grain.
To list all grains, use salt '*' grains.items.
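If the roster grain can't be relied on (for example before that fix is applied), the template itself can be written defensively. A sketch of my_config.conf using grains.get with a fallback; fqdn_ip4 is a standard Salt grain holding the host's IPv4 addresses, and the 'unknown' default is just an illustrative placeholder:

{# Prefer the custom 'ip' roster grain; otherwise fall back to the
   first entry of the standard fqdn_ip4 grain #}
{{ grains.get('ip', grains.get('fqdn_ip4', ['unknown'])[0]) }}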
nginx:
  pkg.installed:
    - name: nginx
  service:
    - name: nginx
    - running
    - enable: True
    - watch:
      - file: /etc/nginx/*

/etc/nginx:
  file.recurse:
    - source: salt://{{slspath}}/etc/nginx/
    - include_empty: True
How can I make the above work?
I want to make it so that every time a new config is added in /etc/nginx/conf.d/newsite.conf, nginx is reloaded.
Currently I can only achieve that if I manually add every conf to the sls in this manner:
/etc/nginx/conf.d/newsite.conf:
  file.managed:
    - source: salt://{{slspath}}/etc/nginx/conf.d/newsite.conf
Is there a way to automate it?
You can't watch a file change within a directory to trigger a state, but you can watch a state's result to do so. In your case, the following should restart nginx whenever a change is made by the /etc/nginx file state:
nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - watch:
      - file: /etc/nginx

/etc/nginx:
  file.recurse:
    - source: salt://{{slspath}}/etc/nginx/
    - include_empty: True
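Since the question asked for a reload rather than a full restart: service.running also takes a reload option that changes what a watch trigger does. A variant of the same state, assuming the init system supports "nginx reload":

nginx:
  pkg.installed:
    - name: nginx
  service.running:
    - enable: True
    - reload: True    # a watch trigger now reloads nginx instead of restarting it
    - watch:
      - file: /etc/nginx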
I'm trying to use CodeDeploy to deploy an application onto EC2, but I am facing the following error:
Duplicate permission setting instructions for /var/www/html/storage/framework
My appspec.yml is below:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html
permissions:
  - object: /var/www/html
    owner: apache
    group: apache
    mode: 644
    except:
      - storage/*
    type:
      - directory
  - object: /var/www/html/storage
    owner: apache
    group: apache
    mode: 777
    type:
      - directory
I've tried various formats for except, including:
Explicitly listing relative paths
except:
  - storage
  - storage/app
  - storage/logs
  - storage/framework
  - storage/framework/views
  - storage/framework/cache
  - storage/framework/sessions
Using a wildcard
except:
  - storage/*
Using just the folder name
except:
  - storage
None of which seem to resolve the issue.
Similar questions
AWS CodeDeploy Duplicate permission
Duplicate permission setting instructions
The except option needs to be specified as an array (use [], not a nested list).
You can see it in the permissions examples in the AppSpec reference guide (scroll down a little): http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html#app-spec-ref-permissions
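Applied to the appspec above, the permissions block would look something like this (paths, owners, and modes exactly as in the question; only the except syntax changes):

permissions:
  - object: /var/www/html
    owner: apache
    group: apache
    mode: 644
    except: [storage]
    type:
      - directory
  - object: /var/www/html/storage
    owner: apache
    group: apache
    mode: 777
    type:
      - directory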