This is my first time using Grunt. When I type grunt on the command line, I get the following error:
The file [Path]grunt.ps1 is not digitally signed. You cannot run this script on the current system. For
more information about running scripts and setting execution policy, see about_Execution_Policies at
https:/go.microsoft.com/fwlink/?LinkID=135170.
Can anyone help me in resolving this issue? Thank you!
When you run a .ps1 PowerShell script, you might get the message saying “.ps1 is not digitally signed. The script will not execute on the system.”
To fix it, run Set-ExecutionPolicy to change the execution policy setting in Windows.
Run this in PowerShell with administrative privileges:
Set-ExecutionPolicy -ExecutionPolicy Bypass -Scope LocalMachine
This command sets the execution policy to Bypass for the LocalMachine scope.
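If you would rather not loosen the policy machine-wide, a narrower variant (a sketch; pick the policy that fits your needs) is to scope the change to your own account and then verify what applies:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
Get-ExecutionPolicy -List    # shows which policy now applies at each scope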
Microsoft documentation:
https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_execution_policies?view=powershell-7.1
I have tried to run a Spring Boot jar file using PuTTY, but the problem is that after I closed the PuTTY session, the service was stopped.
Then I tried bringing the jar up with the following command, and it's working fine:
nohup java -jar /web/server.jar
You should avoid using nohup, as it only disassociates the process from your terminal. Instead, use the following command to run your process as a service.
sudo ln -s /path/to/your-spring-boot-app.jar /etc/init.d/your-spring-boot-app
This command creates a symbolic link to your JAR file, which you can then run as a service with sudo service your-spring-boot-app start. This writes the console log to /var/log/your-spring-boot-app.log.
Moreover, you can configure spring-boot/application.properties to write console logs to a location of your choice using logging.path=path-to-your-log-directory or logging.file=path-to-your-log-file.txt. Also, it may be worth noting that logging.file takes priority over logging.path.
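For example, a minimal application.properties using these settings (the paths here are placeholders) might contain:

# write the console log to a specific file (takes priority over logging.path)
logging.file=/var/log/your-spring-boot-app/app.log
# or let Spring Boot create spring.log inside a directory:
# logging.path=/var/log/your-spring-boot-app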
I followed the Software Collections Quick Start and I now have Python 3.5 installed. How can I make it always enabled in my ~/.bashrc, so that I do not have to enable it manually with scl enable rh-python35 bash?
Use the scl_source feature.
Create a new file in /etc/profile.d/ to enable your collection automatically on start up:
$ cat /etc/profile.d/enablepython35.sh
#!/bin/bash
source scl_source enable rh-python35
See How can I make a Red Hat Software Collection persist after a reboot/logout? for background and details.
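After creating the file, the collection should be enabled in every new login shell. To check without logging out (assuming the collection is rh-python35):

source /etc/profile.d/enablepython35.sh
python --version    # should now report Python 3.5.x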
This answer would be helpful to those who have limited auth access on the server.
I had a similar problem with python3.5 on HostGator's shared hosting. Python3.5 had to be enabled every single damn time after login. Here are my 10 steps for the resolution:
Enable python through the scl script python_enable_3.5 or with scl enable rh-python35 bash.
Verify that it's enabled by executing python3.5 --version. This should give you your python version.
Execute which python3.5 to get its path. In my case, it was /opt/rh/rh-python35/root/usr/bin/python3.5. You can use this path to get the version again (just to verify that this path is working for you.)
Awesome. Now exit out of the current scl shell.
Now, let's get the version again using the complete python3.5 path: /opt/rh/rh-python35/root/usr/bin/python3.5 --version.
It won't give you the version but an error. In my case, it was
/opt/rh/rh-python35/root/usr/bin/python3.5: error while loading shared libraries: libpython3.5m.so.rh-python35-1.0: cannot open shared object file: No such file or directory
As mentioned in Tamas' answer, we have to find that .so file. locate doesn't work on shared hosting, and you can't install it either.
Use the following command to find where that file is located:
find /opt/rh/rh-python35 -name "libpython3.5m.so.rh-python35-1.0"
The above command prints the complete path of the file (the second line below) once it is located. In my case, the output was
find: `/opt/rh/rh-python35/root/root': Permission denied
/opt/rh/rh-python35/root/usr/lib64/libpython3.5m.so.rh-python35-1.0
Here is the complete command that makes python3.5 work on such shared hosting and prints the version:
LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5 --version
Finally, for shorthand, append the following alias to your ~/.bashrc:
alias python351='LD_LIBRARY_PATH=/opt/rh/rh-python35/root/usr/lib64 /opt/rh/rh-python35/root/usr/bin/python3.5'
For verification, reload .bashrc with source ~/.bashrc and execute python351 --version.
Well, there you go: now whenever you log in again, you have python351 to welcome you.
This is not just limited to python3.5; the same approach can help with other scl-installed software.
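The recipe generalizes like this (a sketch; <collection> and <tool> are placeholders for whatever scl collection and binary you are dealing with):

find /opt/rh/<collection> -name "*.so*" 2>/dev/null    # locate the shared libraries, hiding permission-denied noise
alias mytool='LD_LIBRARY_PATH=/opt/rh/<collection>/root/usr/lib64 /opt/rh/<collection>/root/usr/bin/<tool>'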
Environment: Hortonworks Sandbox HDP 2.2.4
Issue: Unable to run the hadoop commands present in shell scripts as the root user. The Oozie job is triggered as the root user, but when hadoop fs or any mapreduce command is executed, it runs as the yarn user. Since yarn doesn't have access to some of the file system, the shell script fails to execute. Let me know what changes I need to make so that the hadoop commands run as the root user.
It is expected behaviour to see the yarn user whenever we invoke shell actions in Oozie, since shell actions run as yarn. One thing we can do is give the yarn user access permissions on the file system.
This is more of a shell script question than an Oozie question. In theory, an Oozie job runs as the user who submits it. In a Kerberos environment, that user is whoever signed in with a keytab/password.
Once the job is running on the Hadoop cluster, you can use "sudo" within your shell script to change which user a command runs as. In your case, you may also want to make sure the user "yarn" is allowed to sudo to the commands you want to execute.
Add the below property to the workflow:
HADOOP_USER_NAME=${wf:user()}
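If this is going into a shell action, it would be an <env-var> entry. A minimal sketch of the relevant part of workflow.xml (the script name is a placeholder):

<shell xmlns="uri:oozie:shell-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <exec>myscript.sh</exec>
    <env-var>HADOOP_USER_NAME=${wf:user()}</env-var>
    <file>myscript.sh</file>
</shell>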
I have a Google Compute Engine VM instance with an Asterisk server running on it. I get this message when I try to run sudo:
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
Is there a password for root so I can try to change it there? Any suggestions on this?
It looks like you have manually edited the /etc/sudoers file, so while you would normally have sudo access, the parse error prevents you from using it directly.
Here's how to fix this situation.
1. Save the current boot disk
go to the instance view in the Developers Console
find your VM instance and click on its name; you should now be looking at a URL such as
https://console.cloud.google.com/project/[PROJECT]/compute/instancesDetail/zones/[ZONE]/instances/[VM-NAME]
stop the instance
detach the boot disk from the instance
2. Fix the /etc/sudoers on the boot disk
create a new VM instance with its own boot disk; you should have sudo access here
attach the disk saved above as a separate persistent disk
mount the disk you just attached
fix the /etc/sudoers file on the disk
unmount the second disk
detach the second disk from the VM
delete the new VM instance (let it delete its boot disk, you won't need it)
3. Restore the original VM instance
re-attach the boot disk to the original VM
restart the original VM with its original boot disk, with fixed config
How to avoid this in the future
Always use the visudo command, rather than just any text editor, to edit the /etc/sudoers file; it validates the contents of the file before saving.
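visudo can also validate a sudoers file without editing it, which is handy when repairing a copy mounted from another disk (the mount path below is an example):

sudo visudo                                # edit /etc/sudoers with syntax checking on save
sudo visudo -c                             # only check the current sudoers file
sudo visudo -c -f /mnt/disk/etc/sudoers    # check a sudoers file at another path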
I ran into this as well, and hit the same problem Nakilon was reporting when trying the gcloud workaround.
What we ended up doing was configuring a startup script that removed the broken sudoers file.
So, in your instance metadata, put something like:
#!/bin/sh
rm "/etc/sudoers.d/broken-config-file"
echo "ok" > /tmp/ok.log
https://cloud.google.com/compute/docs/startupscript
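If you use the gcloud CLI, one way to attach such a script (the instance and file names are placeholders) is:

gcloud compute instances add-metadata <instance-name> \
    --metadata-from-file startup-script=remove-broken-sudoers.sh
gcloud compute instances reset <instance-name>    # reboot so the script runs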
As you probably figured out, this requires the /etc/sudoers file to be fixed. Since nobody has root access to the instance, you will not be able to do this from inside the instance.
The best way to solve this is to edit the disk from another instance. The basic steps to do this are:
Take a snapshot of your disk as a backup (!)
Shutdown your instance, taking care not to delete the boot disk.
Start a new "debugger" instance from one of the stock GCE images.
Attach the old boot disk to the new instance.
In the debugger instance, mount the disk.
In the debugger instance, fix the sudoers file on the mounted disk.
In the debugger instance, unmount the disk.
Shutdown the debugger instance.
Create a new instance with the same specs as your original instance using the fixed disk as the boot disk.
The new disk will then have the fixed sudoers file.
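The mount/fix/unmount steps on the debugger instance might look like this (assuming the old boot disk shows up as /dev/sdb1; check with lsblk first):

sudo mkdir -p /mnt/olddisk
sudo mount /dev/sdb1 /mnt/olddisk              # mount the old boot disk
sudo visudo -c -f /mnt/olddisk/etc/sudoers     # find out what is broken
sudo visudo -f /mnt/olddisk/etc/sudoers        # edit the broken file with syntax checking
sudo umount /mnt/olddisk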
Since I bumped into this issue too: if you have another instance, or any place where you can run with gcloud privileges, you can run:
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
I ran this on a server where gcloud runs as root, so you log in to the other box as root too! Then fix your issue. (If you don't have such a box, just spin up a micro instance with the correct gcloud privileges.) This saves the hassle of all the disk juggling.
As mentioned in the comments above, I was getting the same error as below in a GCP VM.
sudo: parse error in /etc/sudoers near line 21
sudo: no valid sudoers sources found, quitting
sudo: unable to initialize policy plugin
To solve this, I SSHed into another VM, became root, and then ran the gcloud ssh command to our main VM (the one where you are getting the sudo error):
gcloud compute --project "<project id>" ssh --zone "europe-west1-b" "<servername>"
And boom! You are now logged in as root in the VM.
Now you can access/change the /etc/sudoers file accordingly.
I found this hack better than recreating VMs/disks.
Hope this helps someone!
It is possible to connect to a VM as root from the Google Cloud Shell in your Developers Console. Make sure the VM is running, start the shell, and use this command:
gcloud compute ssh root@<instance-name> --zone <zone> [--project <project-id>]
where instance-name is found on the Compute Engine VM Instances screen. project-id is optional, but required if you are connecting to an instance in a different project from the one where you started the shell.
You can then fix this and other issues that may prevent you from using sudo.
I got a Permission denied error when trying to ssh to the problem instance via gcloud. Using a startup script as mentioned above by @Jorick works. Instructions for it are here. You will have to stop and restart the VM instance for the startup script to be executed. I modified the script slightly:
rm -f /etc/sudoers.d/google_sudoers >& /tmp/startup.log
After the restart, launch an SSH session from the Cloud Console and check that you are able to view the file contents (with sudo more /etc/sudoers.d/google_sudoers, for example). If that works, your problem has been solved.
I have source code of a compiler which I am building like this:
/path/to/srcdir/configure --prefix=/path/to/installdir
make
make install
I want to distribute the resulting 'installdir' to other machines, with the intent that anybody could use the compiler binaries without going through the 3-stage build process (I am just including the installdir in my distribution tarball).
For testing, I am copying the installdir to another machine under a different user, and then just trying to compile a test program using the binaries I just copied over, like this:
installdir/bin/ucc -mp -o test load_bl.c
Then, I get an error as follows:
cc1: error: /home/sghosh/normalbuild/installdir/open64-gcc-4.2.0/include: Permission denied
cc1: error: /home/sghosh/normalbuild/installdir/open64-gcc-4.2.0/lib/gcc/x86_64-redhat-linux/4.2.0/include: Permission denied
cc1: error: /home/sghosh/normalbuild/installdir/open64-gcc-4.2.0/x86_64-redhat-linux/include: Permission denied
The /home/sghosh/normalbuild/installdir is what was specified as --prefix during configure on my build machine. The installdir/bin/ucc binary requires some files in the open64-gcc-4.2.0 dir under installdir, but since that is the path baked in by --prefix, it still looks for them there, whereas I want it to look in the same dir on the current machine. FYI, I do not have sudo/root privileges.
How do I come up with a binary distribution that would work on any machine (build once, use anywhere), and not look in the initial --prefix path in this case?
I asked a similar question on Super User, but since lots of edits happened and I got no response to the updated question, I am asking it here.
Check this tool: https://github.com/pgbovine/CDE
CDE is a tool that automatically packages up the Code, Data, and Environment involved in running any set of Linux commands so that they can execute identically on another computer without any installation or configuration.
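A hedged usage sketch (based on the project README; the exact invocation may differ): run the compiler once under cde on the build machine so it records every file the command touches, then ship the generated package.

cde installdir/bin/ucc -mp -o test load_bl.c    # capture the binary plus everything it opens
# copy the resulting cde-package/ directory to the target machine and run the
# wrapped launcher from inside it; no installation or root access is needed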