Why is the Chrome Remote Desktop web app build process stuck? - ninja

I want to build the Chrome Remote Desktop web app from the Chrome Remote Desktop source code. I have fetched the code from the repository and now want to build the web app. I ran the following command; it has been stuck for 4 hours with no response. After 4 hours it printed logs like:
ninja -C out/Release/ remoting_webapp
log: ninja version 0.1.3 initializing
log: magic group: gid=0 (root)
log: entering main Loop
log: generating initial pid array..
log: now monitoring process activity
log: NEW ROOT PROCESS: avahi-autoipd[14897] ppid=14896 uid=0 gid=0
log: - ppid uid=105(avahi-autoipd) gid=113 ppid=1
log: + avahi-autoipd is in magic group, all OK!
log: NEW ROOT PROCESS: avahi-autoipd[14944] ppid=14943 uid=0 gid=0
log: - ppid uid=105(avahi-autoipd) gid=113 ppid=1
log: + avahi-autoipd is in magic group, all OK!
Here it is stuck and does not proceed any further. How long should this take? Is there any error?
Can anyone help me with this?
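A quick sanity check (a sketch, assuming a Debian/Ubuntu-style environment): confirm which ninja binary is actually on the PATH, and re-run the target verbosely so each build command is printed as it runs.
which -a ninja          # list every ninja binary on the PATH
ninja --version         # the build tool prints a plain version number
ninja -v -C out/Release remoting_webapp   # verbose build of the web app target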

Related

Artifactory service fails to start upon Fedora 35 reboot

I have installed on Fedora 35 jfrog-artifactory-oss (v7.31.11-73111900.x86_64) and enabled it as a system service to start at boot. But whenever I boot up my OS, the server never starts properly. I will always need to kill the PID of the active running Artifactory process. If I then do sudo service artifactory restart it will bring up the server cleanly and everything is good. How can I avoid having to do this little dance? Is there something about OS boot up that is causing Artifactory to get thrown off?
When I look at console.log after a boot where the server is not running properly, I see logs like:
2022-01-27T08:35:38.383Z [shell] [INFO] [] [artifactoryManage.sh:69] [main] - Artifactory Tomcat already started
2022-01-27T08:35:43.084Z [jfac] [WARN] [d84d2d549b318495] [o.j.c.ExecutionUtils:165] [pool-9-thread-2] - Retry 900 Elapsed 7.56 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
That shows that the server is not running properly, but doesn't give a clear idea of what to try next. Any suggestions?
Two things to check:
How the artifactory.service file in the systemd directory is set up.
Whenever the OS is rebooted, what errors appear in the logs; check all of them.
Hint: from the warning you shared, it seems the Router service is not able to start after a reboot, so whenever the OS is rebooted and the issue comes up, check router-service.log for any errors or warnings.
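A rough way to work through both checks (a sketch, assuming a standard systemd setup and the default JFROG_HOME of /opt/jfrog; adjust the paths if yours differ):
systemctl cat artifactory.service        # show the unit file systemd is actually using
systemctl status artifactory             # current state after the reboot
journalctl -u artifactory -b --no-pager  # service messages from the current boot
sudo tail -n 100 /opt/jfrog/artifactory/var/log/router-service.log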

Run nginx.exe on Windows as another user

I'm working on a project for deploying a pentest lab with Terraform and Ansible. Everything is working well except for this last problem.
In my lab I have an nginx server running on a Windows server. Nginx with PHP works when I start them as Administrator via Ansible, but I need them to run under a non-admin local account.
For PHP I've made a wrapper using this tool: https://github.com/antonioCoco/RunasCs
But it doesn't work with nginx because of a working directory problem.
Here is the error:
PS C:\Users\Administrator> .\RunAsCs.exe nginx ***** C:\Web\nginx-1.19.6\nginx.exe
[*] Warning: GetUserProfileDirectory failed with error code: 2
[*] Warning: Unable to obtain environment for user 'nginx'.
[*] Warning: Environment of created process might be incorrect.
nginx: [alert] could not open error log file: CreateFile() "logs/error.log" failed (3: The system cannot find the path specified)
2021/03/06 10:18:33 [emerg] 5556#6124: CreateFile() "C:\Windows\system32/conf/nginx.conf" failed (3: The system cannot find the path specified)
And that's expected, because as you can see my wrapper starts in C:\Windows\system32.
I would like to know if there is a solution, either in nginx.conf or with Ansible, to start this exe as the "nginx" user.
This is working code for starting nginx as Administrator:
- name: Starting web server
  win_shell: .\nginx.exe
  args:
    chdir: C:\Web\nginx-1.19.6
  async: 180
  poll: 0
I know there is a psexec module in Ansible, but psexec only works with a Local Admin account, and the whole point is that nginx should not run as Local Admin.
Thanks for the help!
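Not a verified setup, but one thing worth sketching out: nginx resolves relative paths against its prefix, and the error above shows it falling back to C:\Windows\system32, so passing an explicit prefix (-p) and config file (-c) on the command line should remove the dependency on the working directory (password elided as in the question):
.\RunAsCs.exe nginx ***** "C:\Web\nginx-1.19.6\nginx.exe -p C:\Web\nginx-1.19.6 -c C:\Web\nginx-1.19.6\conf\nginx.conf"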

Unable to run Shiny Server Pro during the evaluation period

I am trying to deploy Shiny Server Pro on an Ubuntu 19.04 machine. I opted for the 45-day evaluation period, during which RStudio provides unrestricted access to Shiny Server Pro features. But after setting up the server, I found in the logs that my evaluation period had expired after just one day of use.
To set up the server I followed these steps:
Entered my details in the evaluation form at https://rstudio.com/products/shiny-server-pro/evaluation/
Received a link in my mail which took me to the page containing the commands to set up Shiny Server Pro on Ubuntu: https://rstudio.com/products/shiny/download-commercial/ubuntu/
After following the steps and changing the run_as parameter to my username in the shiny-server.conf file, I was able to get the server running, but found this in the logs:
[2019-10-15T14:05:28.424] [INFO] shiny-server - Shiny Server Pro v1.5.12.1023 (Node.js v10.15.3)
[2019-10-15T14:05:28.426] [INFO] shiny-server - Using config file "/etc/shiny-server/shiny-server.conf"
[2019-10-15T14:05:28.427] [INFO] shiny-server - Using non-persistent random cookie secret
[2019-10-15T14:05:28.467] [INFO] shiny-server - No authentication system configured.
[2019-10-15T14:05:28.469] [INFO] shiny-server - Starting listener on http://[::]:3838
[2019-10-15T14:05:28.478] [INFO] shiny-server - License type: traditional
[2019-10-15T14:05:28.539] [INFO] shiny-server - Licensing check succeeded.
[2019-10-15T14:05:28.541] [ERROR] shiny-server - Your evaluation has ended. Please activate!
I tried the same steps on an EC2 instance, but ran into the same problem.
For anyone facing a similar issue: I was able to resolve the above error by raising a ticket with RStudio, who acknowledged the problem and were kind enough to provide me with a new trial period key.
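For anyone retracing the setup, a minimal sketch of the checks involved (the config path comes from the log output above, the username is a placeholder, and the default server log location is assumed):
sudo grep -n "run_as" /etc/shiny-server/shiny-server.conf   # should show something like: run_as myuser;
sudo systemctl restart shiny-server
sudo tail -n 50 /var/log/shiny-server.log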

Airflow: How to set up the log directory?

I uploaded a DAG file to the web page, and when I click 'Graph View' -> ${my_dag} -> 'View Log', it shows:
*** Log file isn't local.
*** Fetching here: http://:8793/log/demo_dag/hello_task/2018-11-14T15:06:00
*** Failed to fetch log file from worker.
*** Reading remote logs...
*** Unsupported remote log location.
I have checked the airflow.cfg and find these config info:
worker_log_server_port = 8793
base_log_folder = /root/airflow/logs
My questions are:
How do I set the IP address for the log service (only the port is configured)?
I have set a directory for the log service, so why does it still go to /log/.. ?
Any help is appreciated.
This can happen when the task status was manually changed (likely through the "Mark Success" option) and the task never receives a hostname value on the record.
The webserver is attempting to reach out to a server, with no name, to get logs for a task that never ran.
PS: Be careful running processes as the root user.
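A quick way to check both the configuration and the missing-hostname explanation (a sketch, assuming the default airflow.cfg location; hostname_callable may not be present in every version):
grep -E "base_log_folder|worker_log_server_port|hostname_callable" ~/airflow/airflow.cfg
hostname -f    # what a running worker would normally record as its hostname
The empty host in "http://:8793/log/..." points at a task instance with no hostname recorded, rather than a missing port setting.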
I've been getting this error; I fixed it by correcting the socket volume path:
WARNING - OSError while attempting to symlink the latest log directory
On Windows the volume needs a double slash, like this:
volumes:
- //var/run/docker.sock:/var/run/docker.sock
Bind to docker socket on Windows
Setting up Airflow to run with Docker Swarm’s orchestration

django-nginx-gunicorn on Amazon EC2 is not working?

I have tried this tutorial http://adrian.org.ar/python/django-nginx-green-unicorn-in-an-ubuntu-11-10-ec2-instance to set up Django with nginx and gunicorn. I have installed Django and gunicorn in a virtualenv environment. Everything installed perfectly and every command works. When I run gunicorn_django -b 0.0.0.0:8000 from inside my Django app folder, it starts gunicorn and shows the following in the shell:
2012-05-22 13:22:38 [3146] [INFO] Starting gunicorn 0.14.3
2012-05-22 13:22:38 [3146] [INFO] Listening at: http://0.0.0.0:8000 (3146)
2012-05-22 13:22:38 [3146] [INFO] Using worker: sync
2012-05-22 13:22:38 [3149] [INFO] Booting worker with pid: 3149
But if I go to my Amazon DNS http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8000/ in a browser, I get nothing; the browser simply shows a "could not connect" message. If I go to http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com, it shows the nginx welcome page. I don't know why gunicorn is not returning the Django welcome page when I go to http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8000, and I can't even see any GET request reaching the gunicorn worker in the shell. I have tried changing the port in the gunicorn_django command but had no luck; gunicorn still does not serve any page.
Please note I am using django-nonrel.
It looks like you need to configure the security group that controls access to your EC2 instance.
You should end up with something like this:
https://skitch.com/ikis/8h1tc/aws-management-console
In my example, ports 22, 80 and 8000 are open to the world.
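If the screenshot link is no longer reachable, here is a rough CLI equivalent (a sketch; the security group ID below is a placeholder for your instance's group):
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8000 --cidr 0.0.0.0/0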
