Firebase serve not working

I've been migrating to Firebase 3.0, and with the new changes we have to use firebase serve on the CLI for local development, which I believe defaults to port 5000. However, after going through the init process, running firebase serve doesn't do anything after "Starting Firebase development server...", even when specifying port 5000 explicitly. Attempted fixes:
Tried with other ports, like 5001
Reinstalled Node (4.x and 6.x)
Reinstalled NPM
Removed firebase-cli (since firebase-tools is now being used)
Reinstalled firebase-tools with npm
Tweaked firebase init endlessly
Tried on different user accounts on my computer
Restarted computer
Checked that port 5000 was free with lsof -i tcp:5000
Tested address variants such as localhost:5000, 127.x, and 192.x
Here is the debug log:
[debug] ----------------------------------------------------------------------
[debug] Command: /usr/local/bin/node /usr/local/bin/firebase serve -p 5000 --debug
[debug] CLI Version: 3.0.0
[debug] Platform: darwin
[debug] Node Version: v6.2.0
[debug] Time: Sun May 22 2016 01:29:59 GMT+0200 (CEST)
[debug] ----------------------------------------------------------------------
[debug]
[info] Starting Firebase development server...
[info]
[info] Project Directory: /Users/user/Documents/localdev/spfwork
Any thoughts on how to fix?
Thank you for your help.

Fixed - firebase serve from firebase-tools (npm) was missing a logger for some errors, which I added on a pull request here: https://github.com/firebase/firebase-tools/pull/143
My error was that localhost was not resolving for some reason, so I changed the command to firebase serve -p 5000 -o 127.0.0.1; specifying the listen address allowed the server to start successfully.
For reference, the error was Error: getaddrinfo ENOTFOUND localhost
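If you want to confirm that localhost resolution is the culprit before changing anything, a quick check, assuming macOS or Linux with Node installed (the debug log above shows darwin), is to run the same lookup Node itself uses:

node -e "require('dns').lookup('localhost', (err, addr) => console.log(err || addr))"

If that prints an ENOTFOUND error instead of 127.0.0.1 (or ::1), the machine cannot resolve localhost at all, which matches the error above.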

You could just change your /etc/hosts file and use firebase serve normally.
To do this:
Launch Terminal
Type sudo nano /etc/hosts and press Return
Enter your admin password when prompted
Paste the following:
##
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
Save the file by pressing Ctrl + O, then Return
Exit with Ctrl + X
This should fix it.
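To confirm the change took effect, you can query the resolver directly; a sketch for macOS (the platform shown in the debug log), since dscacheutil is macOS-only:

# Should print an entry with ip_address: 127.0.0.1
dscacheutil -q host -a name localhost
# Flush the DNS cache so the edited hosts file is picked up
sudo dscacheutil -flushcache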

If Firebase cannot find the public folder, this error can also show up. In that case it can be resolved by putting index.html and the site's other static files and assets inside the public folder, then running firebase deploy from the Firebase CLI again.
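For reference, the directory Firebase serves and deploys is set by the hosting.public key in firebase.json. A minimal sketch, assuming the default layout that firebase init generates:

{
  "hosting": {
    "public": "public",
    "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
  }
}

If your static files live elsewhere, point "public" at that directory instead.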

Related

Artifactory service fails to start upon Fedora 35 reboot

I have installed jfrog-artifactory-oss (v7.31.11-73111900.x86_64) on Fedora 35 and enabled it as a system service to start at boot. But whenever I boot up my OS, the server never starts properly, and I always need to kill the PID of the running Artifactory process. If I then do sudo service artifactory restart, it brings up the server cleanly and everything is good. How can I avoid having to do this little dance? Is there something about OS boot-up that is throwing Artifactory off?
I have looked at console.log when the server is not running properly after bootup, I see some logs like:
2022-01-27T08:35:38.383Z [shell] [INFO] [] [artifactoryManage.sh:69] [main] - Artifactory Tomcat already started
2022-01-27T08:35:43.084Z [jfac] [WARN] [d84d2d549b318495] [o.j.c.ExecutionUtils:165] [pool-9-thread-2] - Retry 900 Elapsed 7.56 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
That shows that the server is not running properly, but doesn't give a clear idea of what to try next. Any suggestions?
Two things to check:
How the artifactory.service file in the systemd directory is set up.
What errors are seen in the logs whenever the OS is rebooted; check all the logs.
Hint: from the warning shared, it seems the Router service is not able to start when the OS is rebooted, so whenever the issue comes up after a reboot, check router-service.log for any errors/warnings.
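If the logs do point to a boot-order race (the router retrying against http://localhost:8046 suggests a dependency was not up yet), one thing worth trying is a systemd drop-in that delays Artifactory until the network is fully online. A sketch, assuming the unit is named artifactory.service:

# Created via: sudo systemctl edit artifactory.service
# (writes /etc/systemd/system/artifactory.service.d/override.conf)
[Unit]
After=network-online.target
Wants=network-online.target

Then run sudo systemctl daemon-reload and reboot to test whether the service now comes up cleanly.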

Unable to connect to firebase emulator suite with exec

Started the Firebase project emulators (which use Cloud Functions and Firestore) with the below command:
firebase emulators:start
It runs successfully, gives me a path to connect to the functions, and shows a localhost URL for Firestore too.
Then, to execute my Jest tests, I ran the below command:
firebase emulators:exec --only firestore jest
As per the documentation, to connect to the local Firestore we need to use exec, but it's throwing the below error:
i emulators: Starting emulators: firestore
⚠ emulators: emulator hub unable to start on port 4400, starting on 4401
✔ hub: emulator hub started at http://localhost:4401
i Shutting down emulators.
i Stopping emulator hub
⚠ Port 8080 is not open on localhost, could not start firestore emulator.
i To select a different host/port for the emulator, update your "firebase.json":
{
  // ...
  "emulators": {
    "firestore": {
      "host": "HOST",
      "port": "PORT"
    }
  }
}
i Shutting down emulators.
Error: Could not start firestore emulator, port taken.
This error is thrown every time I run the exec command. Can someone point out what could be wrong?
Edit: Logs from firebase emulators:start
firebase emulators:start
i emulators: Starting emulators: functions, firestore, hosting, pubsub
✔ hub: emulator hub started at http://localhost:4400
⚠ Your requested "node" version "8" doesn't match your global version "10"
✔ functions: functions emulator started at http://localhost:5001
i firestore: Serving ALL traffic (including WebChannel) on http://localhost:8080
⚠ firestore: Support for WebChannel on a separate port (8081) is DEPRECATED and will go away soon. Please use port above instead.
i firestore: firestore emulator logging to firestore-debug.log
✔ firestore: firestore emulator started at http://localhost:8080
i firestore: For testing set FIRESTORE_EMULATOR_HOST=localhost:8080
i hosting[website]: Serving hosting files from: public
✔ hosting[website]: Local server: http://localhost:5000
i hosting[admin]: Serving hosting files from: public
✔ hosting[admin]: Local server: http://localhost:5005
i hosting[b2b]: Serving hosting files from: public
✔ hosting[b2b]: Local server: http://localhost:5006
i hosting[b2c]: Serving hosting files from: public
✔ hosting[b2c]: Local server: http://localhost:5007
i hosting[sdk]: Serving hosting files from: public
✔ hosting[sdk]: Local server: http://localhost:5008
✔ hosting: hosting emulator started at http://localhost:5000
i pubsub: pubsub emulator logging to pubsub-debug.log
✔ pubsub: pubsub emulator started at http://localhost:8085
Update
Even with a fresh start, the mentioned error is shown, but killing the processes on the ports made it work.
I added the below to the scripts part of package.json:
"kill": "npx kill-port 5000 5001 8080 8085 4000 9229"
and ran
npm run kill
Not all the ports listed above are required; just 8080 might work in your case, but I have other ports being used by the emulators too.
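As a sketch, the two steps can be combined in package.json so the ports are always freed before the tests run (the script names and the port list are from my setup; trim to what your emulators actually use):

"scripts": {
  "kill": "npx kill-port 5000 5001 8080 8085 4000 9229",
  "test:firestore": "npm run kill && firebase emulators:exec --only firestore \"jest\""
}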
The lsof command is not available in Windows PowerShell. A better cross-platform solution is: npx kill-port 8080
Type this in your terminal, where 8080 is your port number:
lsof -ti tcp:8080 | xargs kill
Reason: the previous Firebase emulator failed to quit.
There is an issue filed for firepit (the standalone CLI installable with curl): #1868
Installing it as a Node package may help: npm install -g firebase-tools
If that is already how you installed it, you should probably file an issue there.
I closed the terminal, opened it again, ran the command again, and it worked.

Airflow live executor logs with DaskExecutor

I have an Airflow installation (on Kubernetes). My setup uses DaskExecutor, and I have configured remote logging to S3. However, while a task is running I cannot see its log; I get this error instead:
*** Log file does not exist: /airflow/logs/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Fetching from: http://airflow-worker-74d75ccd98-6g9h5:8793/log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-74d75ccd98-6g9h5', port=8793): Max retries exceeded with url: /log/dbt/run_dbt/2018-11-01T06:00:00+00:00/3.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f7d0668ae80>: Failed to establish a new connection: [Errno -2] Name or service not known',))
Once the task is done, the log is shown correctly.
I believe what Airflow is doing is:
for finished tasks, read the logs from S3
for running tasks, connect to the executor's log server endpoint and show those
It looks like Airflow uses celery.worker_log_server_port to connect to my Dask executor to fetch logs from there.
How to configure DaskExecutor to expose log server endpoint?
My configuration:
[core] remote_logging = True
[core] remote_base_log_folder = s3://some-s3-path
[core] executor = DaskExecutor
[dask] cluster_address = 127.0.0.1:8786
[celery] worker_log_server_port = 8793
What I verified:
- the log file exists and is being written to on the executor while the task is running
- called netstat -tunlp in the executor container, but did not find any extra port exposed where logs could be served from
UPDATE
Have a look at the serve_logs Airflow CLI command; I believe it does exactly the same thing.
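As a sketch, that command can be run by hand on a worker (Airflow 1.x CLI, matching the versions in these logs) to expose the same endpoint Celery workers provide:

# Serves the task log directory over HTTP on worker_log_server_port (8793 by default)
airflow serve_logs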
We solved the problem by simply starting a Python HTTP server on the worker.
Dockerfile:
# Directory whose layout matches the /log/... path the webserver requests
RUN mkdir -p $AIRFLOW_HOME/serve
# Symlink the real logs directory into it as "log"
RUN ln -s $AIRFLOW_HOME/logs $AIRFLOW_HOME/serve/log
worker.sh (run by Docker CMD):
#!/usr/bin/env bash
# Serve the log directory on the port Airflow's webserver polls
# (celery.worker_log_server_port, 8793 by default)
cd $AIRFLOW_HOME/serve
python3 -m http.server 8793 &
cd -
# Pass all script arguments through to the Dask worker ("$@", not $#)
dask-worker "$@"

Homebrew Nginx not running but says it is in brew services

I have OS X El Capitan. I installed nginx-full via Homebrew. I am supposed to be able to start and stop services with
brew services start nginx-full
I run that command and it seems to start with no problem. I check the running services with
brew services list
That indicates that the nginx-full service is running. But when I run
htop
to look at everything that is running, nginx does not show up and the server is not handling requests.
nginx is failing to launch because of an error, but brew-services is not communicating that to you.
Running it with sudo, as other users have suggested, is just masking the problem. If you just run nginx directly, you may see that there is actually a configuration or permissions issue that is causing nginx to abort. In my case, it was because it couldn't write to the error log:
nginx: [alert] could not open error log file: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
2020/04/02 13:11:53 [warn] 19989#0: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/nginx.conf:2
2020/04/02 13:11:53 [emerg] 19989#0: open() "/usr/local/var/log/nginx/error.log" failed (13: Permission denied)
The last error is causing nginx to fail to launch. You can make yourself the owner of the logs with:
sudo chown -R $(whoami) /usr/local/var/log/nginx/
This should cause subsequent config errors to be written to the error log, even if homebrew services is not reporting them in stderr/stdout for now.
I've opened an issue about this: https://github.com/Homebrew/homebrew-services/issues/215
The log path may not be the same for everyone. You can check the path to the log file in the config file /usr/local/etc/nginx/nginx.conf, where you will find a line like:
error_log /Users/myusername/somepath/nginx.log;
Change the chown command above accordingly. If even this doesn't solve the problem, you may have to do the same for any other log files specified in the server blocks of your nginx configuration.
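A quick way to surface these errors without involving brew services at all is to ask nginx to validate its configuration; assuming the Homebrew-installed nginx is on your PATH:

# Parses the config and prints any syntax or permission errors, then exits
nginx -t

This reproduces problems like the open() failure above directly in your terminal.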
Try launching it with sudo, even though the formula says:
The default port has been set in /usr/local/etc/nginx/nginx.conf to 8080 so that nginx can run without sudo.
sudo brew services start nginx-full
This worked for me:
sudo brew services start nginx
Running sudo nginx worked for me. It initially gave an error stating that a certain file in a certain directory was missing; creating that file, and then another it asked for, got it running properly.
I had a similar problem: brew services start nginx would show nginx as running, but brew services list showed an error. Running sudo nginx solved my issue.

django-nginx-gunicorn on Amazon ec2 is not working?

I have tried this tutorial http://adrian.org.ar/python/django-nginx-green-unicorn-in-an-ubuntu-11-10-ec2-instance to set up Django with nginx and gunicorn. I have installed Django and gunicorn in a virtualenv environment. Everything installed perfectly and every command works. When I run gunicorn_django -b 0.0.0.0:8000 from inside my Django app folder, it starts gunicorn and shows the following on the shell:
2012-05-22 13:22:38 [3146] [INFO] Starting gunicorn 0.14.3
2012-05-22 13:22:38 [3146] [INFO] Listening at: http://0.0.0.0:8000 (3146)
2012-05-22 13:22:38 [3146] [INFO] Using worker: sync
2012-05-22 13:22:38 [3149] [INFO] Booting worker with pid: 3149
But if I go to my Amazon DNS http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8000/ in a browser, I get nothing; the browser simply shows a "could not find/connect" message. Yet if I go to http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com, it shows the nginx welcome page. I don't know why gunicorn is not returning the Django welcome page when I go to http://ec2-XX-XX-XXX-XXX.compute-1.amazonaws.com:8000. I can't even see any GET request log reaching the gunicorn worker in the shell. I have tried changing the port in the gunicorn_django command but had no luck; gunicorn does not serve any page either way.
Please note I am using django-nonrel.
It looks like you need to configure the security group that controls access to your EC2 instance.
You should end up with something like this:
https://skitch.com/ikis/8h1tc/aws-management-console
In my example, ports 22, 80, and 8000 are open to the world.
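The same can be done from the AWS CLI instead of the console; a sketch, where "my-security-group" is a placeholder for whatever group your instance actually uses:

# Allow inbound TCP 8000 from anywhere (gunicorn's port in this setup)
aws ec2 authorize-security-group-ingress \
  --group-name my-security-group \
  --protocol tcp --port 8000 --cidr 0.0.0.0/0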
