PostgreSQL 10.4 with pgAdmin 4 v3.0 issue - Query Tool initialization error for Windows 10 x64 user

I had been running pgAdmin 4 v2.0 with no issues.
This started happening when I upgraded my PostgreSQL to 10.4: pgAdmin 4 v3.0 cannot initialize the Query Tool.
I have uninstalled and reinstalled PostgreSQL multiple times, but that hasn't solved the problem.
I also downgraded pgAdmin 4 to v2.0; unfortunately, v2.0 now fails to initialize as well.
I don't know what causes this problem.
I have searched for a solution on the internet but only found solutions for Ubuntu users.
Has anyone encountered this problem as a Windows 10 x64 user?
Please help...

I get the same message when clicking the "Query Tool" menu.
If you keep trying this action over and over, you may eventually get into the Query Tool. Sometimes (rarely) it works. You will spawn many processes in trying this approach, though; check the dashboard to see (and kill) the processes spawned on each launch attempt. I have found no other way to get to the Query Tool. I also cannot revert back to pgAdmin 4 version 2. There is no info in the browser dev-tools console, and I see no info in the pgAdmin 4 logs when I click the Query Tool menu. As stated in another forum, using the File / Reset Layout menu may fix this issue, but I have had only limited success with it.
I am using: pgAdmin 4 version 3.
Windows 10 Pro.
PostgreSQL 9.3, 9.6, and 10.
Firefox 60.0.1, Microsoft Edge 41.16299.402.0, and Google Chrome (latest version).

As the answer from pgAdmin 4 v3.0 Query Tool Initialize Error suggests, you need to edit the config.py file in your Windows installation. Mine is in "C:\Program Files (x86)\pgAdmin 4\v3\web", and there you have a line with:
DEFAULT_SERVER = '127.0.0.1'
It should be changed to:
DEFAULT_SERVER = 'localhost'
Then don't forget to restart pgAdmin from the tray.
I think it is related to the fact that your Postgres database was installed as 'localhost' instead of '127.0.0.1'.
Hope this helps.
UPDATE: I also found out that this solution doesn't work when you're using a firewall.
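A quick way to see why the two spellings can behave differently is to check what the operating system actually resolves localhost to (getent is the Linux-side check; on Windows, `ping -4 localhost` and `ping -6 localhost` give a similar picture):

```shell
# List every address "localhost" resolves to. If it resolves to ::1 (IPv6)
# while the server only listens on 127.0.0.1, the two names are not
# interchangeable.
getent ahosts localhost | awk '{print $1}' | sort -u
```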

Thank you for your suggestion, user3613316.
I have followed your instructions; indeed, I have to try launching the Query Tool over and over in order to get into it.
It sometimes takes over 15 or 20 attempts to enter the "Query Tool" menu.
I am using the keyboard shortcut Alt+Shift+Q to proceed faster.

I found that to get this working there are several things which need to be checked:
web server logs
PostgreSQL logs
browser console output
Also check (unless you are accessing the pgAdmin 4 port directly) that your web server is proxying the correct port with all the correct options. I use nginx, and this works fine with:
server {
    server_name db.serv1;
    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/myserver.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/myserver.com/privkey.pem;
    ssl_trusted_certificate /etc/letsencrypt/live/myserver.com/fullchain.pem;

    location / {
        proxy_pass http://127.0.0.1:5050;
        proxy_http_version 1.1;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
Make sure that your X-Frame-Options header is not set to "DENY". This is sometimes set as a click-jacking prevention; it will block the frame in which the Query Tool window opens. You can check this with your browser's "inspect" tools or console output.
The error message I got was somewhere along the lines of "X-Frame-Options set to 'DENY'", and the browser page displayed "FAILED TO CONNECT".
If you have differences between "localhost" and "127.0.0.1", or similar mismatches between your permission settings and connection options, this can create a conflict. Ensure that the server connection parameters you specify in pgAdmin 4 match the connection settings in pg_hba.conf (localhost is not always the same as 127.0.0.1). They should match exactly; that eliminates the variable of permissions versus connection type.
I found that when specifying localhost versus 127.0.0.1, localhost in some instances connects through the Unix-domain socket rather than TCP.
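The distinction matters because of how pg_hba.conf entries are matched. A minimal sketch (illustrative entries, not a recommendation): `local` lines govern Unix-domain-socket connections, `host` lines govern TCP, and since localhost can resolve to ::1 as well as 127.0.0.1, both address families may need a line:

```
# pg_hba.conf (illustrative)
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer
host    all       all   127.0.0.1/32  md5
host    all       all   ::1/128       md5
```

On Windows there are no Unix-domain sockets, so only the `host` lines apply there.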

Had similar hanging problems with the latest pgAdmin 4 v4.13; tried all the suggestions, but none worked. Rolled back to v4.6, and everything went back to normal. Wasted some time!

Related

Nginx proxy is not loading static files

I've deployed this app (polynote, https://github.com/polynote) on a remote machine (listening on port 8192), and now I want the UI to be accessible through a browser. It all works fine using SSH port forwarding. Now I want to set up some rules using nginx (which I already use to proxy things like Jupyter) to make this automatic. This is the basic rule I used:
location /polynote {
    proxy_pass http://localhost:8192;
    proxy_pass_request_headers on;
}
When trying to connect from the browser to `remote-machine:/polynote`, I see connections arriving in the server log:
[blaze-selector-1] INFO org.http4s.blaze.channel.nio1.NIO1SocketServerGroup - Accepted connection from /127.0.0.1:46673
but the browser page remains blank and no error is logged anywhere.
Inspecting the network using Chrome dev tools, I see requests arriving in the Network tab (screenshot omitted), while a normal working installation (tried on my machine) reports different network traffic (screenshot omitted).
Do you have any idea why I am losing static files?

Configuring local laptop to a domain IP

I want to configure my laptop, running Ubuntu or Windows, to connect to my website.
I want to run a web server directly from my laptop, which can be accessed from my website, i.e. www.xyz.com/laptop_projects.
I have a static IP, a .com address, and a laptop running Ubuntu and Windows.
Can anyone let me know how to configure this so that anyone accessing my website can directly access projects running on my laptop?
Hope I am clear; let me know if anything is unclear.
I also have a MacBook Pro, if someone has a solution for Mac only.
What you are probably looking for is a reverse proxy; you could use Nginx for this. For example, if you would like to show the contents of your laptop at www.example.com/projects, then on the server hosting www.example.com you could have something like this:
location /projects {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_pass http://your-laptop:8000;
}
Now you may want to use a dynamic DNS for your laptop so that you don't have to frequently modify your Nginx configuration.
For testing purposes and just serving static files, you could use this tiny web server: https://go-www.com/
More nginx examples: https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/
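For a quick test of the laptop side before wiring up nginx, any small web server on the port from the proxy_pass line will do; a sketch using Python's built-in server (port 8000 here matches the config above and is an assumption):

```shell
# Serve the current directory on port 8000, then check it answers locally.
python3 -m http.server 8000 >/dev/null 2>&1 &
SERVER_PID=$!
sleep 1
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8000/
kill "$SERVER_PID"
```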

Running shiny apps in RStudio when proxying with nginx

I am trying to use RStudio server proxied through nginx to develop a shiny app. When I use RStudio (not server) to run the shiny app, things work just fine. However, when I run things through the proxied RStudio, the app appears in the "viewer" pane but the app is not functional. My console reports that the app is running on localhost:3691. Do I need to proxy that port, also, or is the websocket system that runs shiny not going to work through the nginx proxy?
There is how-to documentation on the RStudio website that details how to use either Nginx or Apache as a reverse proxy.
https://support.rstudio.com/hc/en-us/articles/200552326-Running-RStudio-Server-with-a-Proxy
In particular it mentions "you will also need to include code to correctly proxy websockets in order to correctly proxy Shiny apps and R Markdown documents within RStudio Server."
To make Shiny apps work in RStudio Server Pro, websocket support in NGINX is needed. See also the RStudio documentation article about this (which unfortunately does not specifically list which settings relate to websocket support).
The important bits are:
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}
At the top of the NGINX config file.
And
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
In the location {} block of RStudio Server Pro.
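Putting the map and the proxy_set_header lines together, a minimal sketch of where each piece sits (the listen port and RStudio Server's default port 8787 are assumptions; adjust to your setup):

```
http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://localhost:8787;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
```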

Security in an R Shiny Application

I want to publish an R Shiny web application (http://www.rstudio.com/shiny/) on the web, but I want to password protect it so that only people with credentials can view what I have published. What is the best way to do this ?
This might be a little late, but I am going to answer anyway. If you have already got a solution, can you please share it with us?
My primary aim was very simple. I had a working version of the Shiny app on my laptop, which I used to run as shown below for local testing.
R -e "shiny::runApp('.')"
Then came the moment when we had to put this out on an Amazon EC2 instance.
At first my attempt was to directly proxy Apache to port 8100, which my app would listen on. But that didn't work that well, as it appears that running the server in this manner uses raw sockets, whereas using shiny-server falls back to sock.js, so the communication is over HTTP instead.
So I downloaded the shiny-server app on our EC2 instance by following the instructions here: https://github.com/rstudio/shiny-server
Btw, though the instructions there recommend you install node.js from source if you are using a RHEL instance, going the yum install way worked pretty well for me. You have those instructions here.
Following the node.js and shiny-server installation and setup, I edited my Apache conf (Ubuntu and RHEL name the conf differently, so edit the one you have), adding a virtual host to service my requests. As you can see, I also protected it with Apache Basic auth using the Location directive.
<VirtualHost *:80>
    ProxyPass / http://localhost:3838/
    ProxyPassReverse / http://localhost:3838/
    ProxyPreserveHost On
    <Location />
        AuthType Basic
        AuthName "Restricted Access - Authenticate"
        AuthUserFile /etc/httpd/htpasswd.users
        Require valid-user
    </Location>
</VirtualHost>
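The AuthUserFile referenced above has to exist. One way to create an entry is with htpasswd from apache2-utils/httpd-tools; if that isn't installed, openssl can generate a compatible hash (the user name and password here are placeholders):

```shell
# Create a basic-auth entry for user "alice" (password "s3cret") using the
# Apache apr1 hash format; written to a local file for illustration.
printf 'alice:%s\n' "$(openssl passwd -apr1 s3cret)" > htpasswd.users
cat htpasswd.users
```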
Apart from this, I also edited the shiny-server conf to listen to requests from 127.0.0.1 only (localhost only). So in my shiny-server conf I have the following:
listen 3838 127.0.0.1;
Btw, you wouldn't need this in an Amazon EC2 environment, as you can use the security-group settings in your EC2 dashboard to do the same. I did it anyway as a good measure.
This, for now, is enough, as we were looking for something very quick & simple.
Now we are desperately waiting for the awesome RShiny folks to provide auth as part of the enterprise edition deal.
Hope this helps.
This may be a little late for the OP but it may be useful for your use case:
https://auth0.com/blog/2015/09/24/adding-authentication-to-shiny-open-source-edition/
It's similar to Rohith's answer, but it uses Auth0 instead, which gives you more authentication options (like connecting Google accounts, Active Directory, LDAP, and a big etc.).
Disclaimer: I work at Auth0, we are using Shiny internally with this configuration and it works fine.
A little bit later (2016), but I've found another option using nginx as a proxy.
This guide was put together by partially following this guideline: https://support.rstudio.com/hc/en-us/articles/213733868-Running-Shiny-Server-with-a-Proxy
On an Ubuntu 14.04:
Install nginx
Change the config file /etc/nginx/nginx.conf to this:
events {
    worker_connections 768;
    multi_accept on;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen XX;

        location / {
            proxy_pass http://localhost:YY;
            proxy_redirect http://localhost:YY/ $scheme://$host/;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_read_timeout 20d;
            auth_basic "Restricted Content";
            auth_basic_user_file /etc/nginx/.htpasswd;
        }
    }
}
XX: port that nginx will listen on
YY: port that the Shiny server uses
Using this tutorial, I added password authentication to the nginx server: https://www.digitalocean.com/community/tutorials/how-to-set-up-password-authentication-with-nginx-on-ubuntu-14-04
Set up the shiny process or the shiny server to only listen to localhost (127.0.0.1)
This could be viewed as an HTTP requirement rather than a Shiny feature. If so, you could look into first implementing HTTP authentication; once the credentials are verified, you can redirect to your Shiny app URL.
Here's a blog post that explains setting up simple authentication for Apache Tomcat.
Also, take a look at this article for setting it up in IIS.
Searching SO or the web for basic authentication should get you a few useful links and get you closer.
At this time there isn't a straightforward way to do this. However, we will be releasing a commercial version of Shiny Server in the near future. We'll be doing a beta in the upcoming month or so and the official release before the end of the year. This will include the ability to have password authentication for your Shiny apps. In addition, Shiny Server Pro will have features around security, authentication, scalability, server monitoring, and premium support.
Another place where you might be able to get some feedback is the Shiny mailing list; there are a lot of active users who might have some ideas. Otherwise, if you'd like to contact us directly about this, you can email info@rstudio.com and I'll respond.
Best,
Josh
Product Manager - RStudio
It's too late for the answer, but I think there has been development around this. You can use Google auth to log in to Shiny web apps. There is a solution on a different thread you can refer to: ShinyApp Google Login
We have implemented the solution with multi-level authentication:
Query-string-based values for the application to load:
a time-based encrypted token that expires at a certain point in the future
a user name (used in the next auth step)
A modal dialog to validate the access code, checked against the AES-encrypted value of the user name.
Reference: CRAN - digest -> AES; CBC mode is used for both levels, with an encryption key (32-byte raw vector) and an IV of long length.

Using syslog with nginx / php-fpm

I'm looking into migrating my Apache-based Drupal installs, but I need rsyslog-based remote error logging. We're using Amazon EC2, and error_log files written to instances that come and go are a bit of a nightmare. In particular, if an application hits a PHP error, we need to see it as soon as it happens.
Assuming we're using nginx and php-fpm, if I put the following into an fpm pool definition, it works:
error_log = /path/to/logs/error.log
since if I do this in PHP, it will indeed go to /path/to/logs/error.log:
<?php
error_log('send this to our log');
?>
But if I set up the pool as
error_log = syslog
as near as I can tell, the output goes nowhere at all. At least, I have yet to figure out a configuration for rsyslog.conf on Ubuntu that will receive any input from this.
What's the best way to get centralized error logging when using nginx and php-fpm? I'm using Ubuntu 11.10, with the packaged versions of php5-fpm and nginx.
nginx is unrelated here. I would recommend the syslog module for Drupal. Turn that on, and everything logged through watchdog goes to syslog — and watchdog catches PHP errors too.
PHP has a native syslog() function as well.
If you are using syslog-ng, I think you can route the normal PHP log into syslog, but I've never heard of being able to do error_log = syslog.
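For the "receive any input" side, rsyslog of that era can be told to accept remote messages over UDP with its legacy directives. A minimal sketch (the drop-in file name is an example, port 514 is the conventional syslog port, and the EC2 security group must allow it):

```
# /etc/rsyslog.d/10-remote.conf (illustrative)
$ModLoad imudp
$UDPServerRun 514
```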
