Update to pyodbc 4 causes "Segmentation fault"

I have a Django+celery site running on Elastic Beanstalk that used pyodbc 3. Because of another issue, I had to update to pyodbc 4.
However, the web site started giving error 500, and this is the information logged:
[Fri Feb 24 20:02:14.448536 2017] [core:notice] [pid 27978] AH00052: child pid 28292 exit signal Segmentation fault (11)
[Fri Feb 24 20:02:15.145503 2017] [core:error] [pid 27988] [client 205.165.34.225:50040] End of script output before headers: wsgi.py, referer: ...
During that time the Celery worker was still running and working just fine! I re-deployed the whole server (Rebuild Environment in AWS), but it didn't fix the issue.
I had to revert to pyodbc 3 and it started working just fine. Any ideas?
django 1.10.4, pyodbc 4.0.11, django-pyodbc-azure 1.10.0.1

We have run into this issue as well, and will be providing a stack trace and related information to pyodbc on Monday. We've fixed it by pinning our requirements like this:
pyodbc==3.1.1
django-pyodbc-azure==1.10.4.0
While this doesn't get you pyodbc 4.0 yet, it will keep the rest of your site working. We had some luck running the site on 4.0.5, but ran into issues with migrations. (Note: if you use django-pyodbc-azure, you should use the highest version that matches your Django version, i.e. 1.10.4.0 for Django 1.10.)
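As a concrete sketch of the pinning (the package versions are the ones from this answer, plus the Django version from the question; Elastic Beanstalk's standard Python platform installs a requirements.txt at the project root automatically on deploy):
# requirements.txt — pin until the pyodbc 4 segfault is sorted out
django==1.10.4
pyodbc==3.1.1
django-pyodbc-azure==1.10.4.0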
Good luck!

Related

Artifactory service fails to start upon Fedora 35 reboot

I have installed jfrog-artifactory-oss (v7.31.11-73111900.x86_64) on Fedora 35 and enabled it as a system service to start at boot. But whenever I boot the OS, the server never starts properly; I always need to kill the PID of the running Artifactory process. If I then run sudo service artifactory restart, it brings the server up cleanly and everything is good. How can I avoid having to do this little dance? Is there something about OS boot-up that is throwing Artifactory off?
I have looked at console.log when the server is not running properly after boot-up, and I see entries like:
2022-01-27T08:35:38.383Z [shell] [INFO] [] [artifactoryManage.sh:69] [main] - Artifactory Tomcat already started
2022-01-27T08:35:43.084Z [jfac] [WARN] [d84d2d549b318495] [o.j.c.ExecutionUtils:165] [pool-9-thread-2] - Retry 900 Elapsed 7.56 minutes failed: Registration with router on URL http://localhost:8046 failed with error: UNAVAILABLE: io exception. Trying again
That shows that the server is not running properly, but doesn't give a clear idea of what to try next. Any suggestions?
Two things to check:
1. How the artifactory.service file in the systemd directory is set up.
2. What errors appear in the logs whenever the OS is rebooted; check all the logs.
Hint: from the warning you shared, it seems the Router service is unable to start after an OS reboot, so the next time the issue comes up after a reboot, check router-service.log for errors/warnings.
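A minimal sketch of those checks, assuming an RPM install with the default JFROG_HOME of /opt/jfrog (adjust the paths to your layout):
systemctl cat artifactory.service        # show the unit file systemd actually loaded
systemctl status artifactory.service     # current state plus recent journal lines
journalctl -u artifactory -b             # everything the service logged since this boot
sudo less /opt/jfrog/artifactory/var/log/router-service.log    # router errors after a bad boot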

Docker on CentOS 7.2: kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1

I'm running Docker on CentOS 7. From time to time the following message is displayed:
Message from syslogd@dev-master at Mar 29 17:23:03 ...
kernel:unregister_netdevice: waiting for lo to become free. Usage count = 1
I've googled a lot, read many of the resources I found, and tried various fixes such as keeping my system updated and upgrading the kernel, but the message keeps showing up. It's not frequent, but sooner or later I'll see it. I also found that the issue for this problem on Docker's GitHub is still open, so my questions are:
What does this message mean? Could somebody give me a simple explanation of why Docker causes it?
Is there any workaround for this?
If it can't be fixed yet (the issue is still open), will it affect the server or the services running inside Docker containers? Could it become a serious performance issue? It also happens on our production servers.
Docker version:
Client:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Wed Apr 27 00:34:42 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.11.1
 API version:  1.23
 Go version:   go1.5.4
 Git commit:   5604cbe
 Built:        Wed Apr 27 00:34:42 2016
 OS/Arch:      linux/amd64
OS info:
CentOS 7, with kernel version: 4.6.0-1.el7.elrepo.x86_64
I'd really appreciate any info, tips, or resources. Thanks a lot.
Your best source of information is the issue you linked, docker#5618. This is a kernel bug that has not yet been resolved. The issue is "triggered" by Docker because creating and destroying containers also creates and destroys the network interfaces for those containers.
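Until a kernel fix lands, about all you can do is confirm and monitor the message. A small sketch (the grep strings come from the message above; hostname and timestamps will differ):
dmesg | grep unregister_netdevice                     # kernel ring buffer
journalctl -k | grep "waiting for lo to become free"  # same kernel messages via the journal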

Shiny server Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}

I've encountered a problem deploying my Shiny app on Ubuntu 16.04 LTS.
After I run sudo systemctl start shiny-server and point my browser at http://192.168.*.*:3838/StockVis/, the web page greys out within a second.
I found some warnings in the web console (below) and have searched the web for about two weeks, but I still have no solution. :(
***"Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [INF]: Connection opened. http://192.168.**.***:3838/StockVis/"
Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [DBG]: Open channel 0
The application unexpectedly exited.
Diagnostic information is private. Please ask your system admin for permission if you need to check the R logs.
**Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [INF]: Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: SockJS connection closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Channel 0 is closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Removed channel 0, 0 left*****
Any suggestions on how to move forward would be much appreciated.
This can indicate that something in your R code is causing an error. Since that error could be anything, this answer is about gathering the information; the browser console messages will not tell you what it is. To get at the error, you need to configure Shiny Server not to delete the log when the application exits.
Assuming you have sudo access:
$ sudo vi /etc/shiny-server/shiny-server.conf
Place the following line in the file after run_as shiny;:
preserve_logs true;
Restart Shiny Server:
sudo systemctl restart shiny-server
Reload your Shiny app.
In the /var/log/shiny-server/ directory there will be a log file named after your application. Viewing that file will give you more information about what is going on.
Warning: after you are done, remove the preserve_logs true; line from the conf file and restart Shiny Server. Otherwise you will keep generating log files you don't want.
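For orientation, a minimal sketch of where the line sits in a stock shiny-server.conf; your file will contain your own settings, and only the preserve_logs line is the addition:
# /etc/shiny-server/shiny-server.conf
run_as shiny;
preserve_logs true;

server {
  listen 3838;
  location / {
    site_dir /srv/shiny-server;
    log_dir /var/log/shiny-server;
  }
}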

Elastix: Apache 500 error after fresh install

After installing the Elastix 4.0 distro, I always have a broken web-admin panel. What am I doing wrong?
/var/log/httpd/ssl_error_log
[Fri Oct 07 22:34:08.636204 2016] [:error] [pid 4510] [client 192.168.88.88:50414] PHP Fatal error: Uncaught --> Smarty: unable to write file /var/www/html/var/templates_c/wrt57f7eaa09b4319_91812285 <-- \n thrown in /usr/share/php/Smarty/sysplugins/smarty_internal_write_file.php on line 46
You'll need to make the /var/www/html/var/templates_c directory writable by the user the web server is running under. For example:
chmod -R +w /var/www/html/var/templates_c
On Elastix, Apache should run as the asterisk user. /var/www/html/ and /var/lib/php should be changed to the same owner.
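A hedged sketch of those two fixes together, assuming Apache really does run as asterisk on your box (verify first with ps aux | grep httpd):
chown -R asterisk:asterisk /var/www/html /var/lib/php   # same owner for both trees
chmod -R u+w /var/www/html/var/templates_c              # let Smarty write its compiled templates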

JasperServer HTTP 404 error

Could someone please help me figure out this issue? I have installed the JasperServer trial version on Windows XP. The Tomcat server seems to work fine, but when I try to connect to JasperServer I get an HTTP 404 error.
SEVERE: A web application appears to have started a TimerThread named
[adhocCache] via the java.util.Timer API but has failed to stop it.
To prevent a memory leak, the timer (and hence the associated thread)
has been forcibly cancelled.
Jul 15, 2013 4:22:04 PM org.apache.catalina.startup.HostConfig deployDescriptor
INFO: Server startup in 30926 ms
In the localhost log file under the Tomcat logs directory I could also see this error:
Caused by: net.sf.ehcache.config.InvalidConfigurationException: There is one error in your configuration:
* CacheManager configuration: You've assigned more memory to the on-heap than the VM can sustain, please adjust your -Xmx setting accordingly
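The ehcache message is explicit: the cache's configured on-heap size exceeds what -Xmx allows, so the JVM heap needs to grow. A sketch using Tomcat's setenv hook (create the file if it's missing; the heap sizes here are illustrative, not from this thread):
rem %CATALINA_HOME%\bin\setenv.bat (on Windows; setenv.sh on Linux)
rem Make -Xmx larger than ehcache's total on-heap allocation
set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m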
