Trouble configuring JRadius on a RADIUS server - networking

I've installed and configured FreeRADIUS on Ubuntu 12.04.
Now I want to configure FreeRADIUS to work with a JRadius server.
I followed the Coova documentation from here:
http://www.coova.org/JRadius/FreeRADIUS
Those steps explain that I must add some configuration to /etc/freeradius/radiusd.conf and /etc/freeradius/sites-enabled/default.
Now when I try to run my RADIUS server I get an error message like the one below:
Thu Mar 7 11:56:26 2013 : Debug: server { # from file /etc/freeradius/radiusd.conf
Thu Mar 7 11:56:26 2013 : Debug: modules {
Thu Mar 7 11:56:26 2013 : Debug: Module: Checking authenticate {...} for more modules to load
Thu Mar 7 11:56:26 2013 : Debug: (Loaded rlm_digest, checking if it's valid)
Thu Mar 7 11:56:26 2013 : Debug: Module: Linked to module rlm_digest
Thu Mar 7 11:56:26 2013 : Debug: Module: Instantiating module "digest" from file /etc/freeradius/modules/digest
Thu Mar 7 11:56:26 2013 : Debug: Module: Checking authorize {...} for more modules to load
Thu Mar 7 11:56:26 2013 : Debug: (Loaded rlm_preprocess, checking if it's valid)
Thu Mar 7 11:56:26 2013 : Debug: Module: Linked to module rlm_preprocess
Thu Mar 7 11:56:26 2013 : Debug: Module: Instantiating module "preprocess" from file /etc/freeradius/modules/preprocess
Thu Mar 7 11:56:26 2013 : Debug: preprocess {
Thu Mar 7 11:56:26 2013 : Debug: huntgroups = "/etc/freeradius/huntgroups"
Thu Mar 7 11:56:26 2013 : Debug: hints = "/etc/freeradius/hints"
Thu Mar 7 11:56:26 2013 : Debug: with_ascend_hack = no
Thu Mar 7 11:56:26 2013 : Debug: ascend_channels_per_line = 23
Thu Mar 7 11:56:26 2013 : Debug: with_ntdomain_hack = no
Thu Mar 7 11:56:26 2013 : Debug: with_specialix_jetstream_hack = no
Thu Mar 7 11:56:26 2013 : Debug: with_cisco_vsa_hack = no
Thu Mar 7 11:56:26 2013 : Debug: with_alvarion_vsa_hack = no
Thu Mar 7 11:56:26 2013 : Debug: }
Thu Mar 7 11:56:26 2013 : Error: /etc/freeradius/radiusd.conf[644]: Failed to link to module 'rlm_jradius': file not found
Thu Mar 7 11:56:26 2013 : Error: /etc/freeradius/sites-enabled/default[71]: Failed to load module "jradius".
Thu Mar 7 11:56:26 2013 : Error: /etc/freeradius/sites-enabled/default[62]: Errors parsing authorize section.
What should I do to solve this problem?
Thanks

Check whether FreeRADIUS was compiled with the "--with-rlm_jradius" parameter when it was installed.
Check whether jradius is listed in the instantiate section of radiusd.conf.
Check whether your jradius configuration is set up properly in radiusd.conf or in a separate .conf file.
These are the most common causes of trouble when setting this up.
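For reference, here is a rough sketch of where those pieces live in a FreeRADIUS 2.x setup, following the Coova documentation linked above (the jradius connection parameters themselves are described there and are only hinted at here):

# /etc/freeradius/radiusd.conf -- define the module
modules {
    ...
    jradius {
        # connection settings for your JRadius server go here,
        # exactly as given in the Coova doc
    }
}

# /etc/freeradius/radiusd.conf -- make sure it is instantiated
instantiate {
    ...
    jradius
}

# /etc/freeradius/sites-enabled/default -- call it from the sections you need
authorize {
    ...
    jradius
}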
Okis

rlm_jradius has not been marked as stable, and so is not available in the standard Debian packages for the v2.x.x branch.
https://github.com/FreeRADIUS/freeradius-server/blob/v2.x.x/src/modules/stable
You will need to build the module from source. Adjust the git checkout line to match the version of the server you are using.
git clone git@github.com:FreeRADIUS/freeradius-server.git
cd freeradius-server
git checkout release_2_2_0
./configure --with-rlm_jradius
cd src/modules/rlm_jradius
make
cp -r .libs/rlm_jradius*.so /usr/lib/freeradius/
cp jradius.conf /etc/freeradius/modules/
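With the module and its config in place, running the server in debug mode is a quick way to confirm that rlm_jradius now links instead of failing with "file not found":
freeradius -X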
Note that rlm_jradius is not currently available in FreeRADIUS version 3.0, and that the v2.2.x branch will soon be deprecated.

In case others run across this thread: JRadius support is included in the freeradius package from Ubuntu 12.10 through Ubuntu 16.x. It was removed in Ubuntu 17, since the package moved to version 3 of FreeRADIUS and JRadius support was not ported to version 3.
https://packages.ubuntu.com/search?keywords=freeradius
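If you are on one of those releases, it is worth checking that the installed package really ships the module before editing any configuration (the package layout varies a little between releases, so both checks below are useful):
dpkg -L freeradius | grep -i jradius
dpkg -S rlm_jradius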

Related

Debian 9 / Apache 2.4 / Radicale 2.1 / uWSGI

I'm trying to use Radicale via uWSGI and Apache.
After some struggle, I managed to use WSGI for radicale on Apache but I would like to offload the authentication to Apache.
So I created the Apache conf as follows:
<VirtualHost *:80>
ServerAdmin xxx@gmail.com
ServerName radicale.domain.com
ProxyPass / uwsgi://127.0.0.1:5232/
<Directory "/etc/radicale">
AllowOverride None
Require all granted
</Directory>
TransferLog /var/log/apache2/radicale_access.log
ErrorLog /var/log/apache2/radicale_error.log
</VirtualHost>
My uwsgi app is
[uwsgi]
http-socket = 127.0.0.1:5232
processes = 2
plugin = python3
#module = radicale
wsgi-file=/etc/radicale/radicale.wsgi
env = RADICALE_CONFIG=/etc/radicale/config
When I call http://radicale.domain.com, I get a generic 500 error, but I can't see any errors in the Apache error log or the uWSGI log.
The uWSGI log shows (in verbose mode):
Thu May 7 17:40:39 2020 - *** Starting uWSGI 2.0.14-debian (64bit) on [Thu May 7 17:40:39 2020] ***
Thu May 7 17:40:39 2020 - compiled with version: 6.3.0 20170516 on 17 March 2018 15:41:47
Thu May 7 17:40:39 2020 - os: Linux-2.6.32-042stab128.2 #1 SMP Thu Mar 22 10:58:36 MSK 2018
Thu May 7 17:40:39 2020 - nodename: xxx
Thu May 7 17:40:39 2020 - machine: x86_64
Thu May 7 17:40:39 2020 - clock source: unix
Thu May 7 17:40:39 2020 - pcre jit disabled
Thu May 7 17:40:39 2020 - detected number of CPU cores: 8
Thu May 7 17:40:39 2020 - current working directory: /
Thu May 7 17:40:39 2020 - writing pidfile to /run/uwsgi/app/radicale/pid
Thu May 7 17:40:39 2020 - detected binary path: /usr/bin/uwsgi-core
Thu May 7 17:40:39 2020 - setgid() to 33
Thu May 7 17:40:39 2020 - set additional group 125 (redis)
Thu May 7 17:40:39 2020 - set additional group 5003 (ispapps)
Thu May 7 17:40:39 2020 - set additional group 5004 (ispconfig)
Thu May 7 17:40:39 2020 - setuid() to 33
Thu May 7 17:40:39 2020 - your processes number limit is 256137
Thu May 7 17:40:39 2020 - your memory page size is 4096 bytes
Thu May 7 17:40:39 2020 - detected max file descriptor number: 131072
Thu May 7 17:40:39 2020 - lock engine: pthread robust mutexes
Thu May 7 17:40:39 2020 - thunder lock: disabled (you can enable it with --thunder-lock)
Thu May 7 17:40:39 2020 - uwsgi socket 0 bound to UNIX address /run/uwsgi/app/radicale/socket fd 3
Thu May 7 17:40:39 2020 - uwsgi socket 1 bound to TCP address 127.0.0.1:5232 fd 5
Thu May 7 17:40:39 2020 - Python version: 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
Thu May 7 17:40:39 2020 - *** Python threads support is disabled. You can enable it with --enable-threads ***
Thu May 7 17:40:39 2020 - Python main interpreter initialized at 0x7fc12c963dd0
Thu May 7 17:40:39 2020 - your server socket listen backlog is limited to 100 connections
Thu May 7 17:40:39 2020 - your mercy for graceful operations on workers is 60 seconds
Thu May 7 17:40:39 2020 - mapped 218304 bytes (213 KB) for 2 cores
Thu May 7 17:40:39 2020 - *** Operational MODE: preforking ***
Thu May 7 17:40:39 2020 - WSGI app 0 (mountpoint='') ready in 0 seconds on interpreter 0x7fc12c963dd0 pid: 23261 (default app)
Thu May 7 17:40:39 2020 - *** uWSGI is running in multiple interpreter mode ***
Thu May 7 17:40:39 2020 - spawned uWSGI master process (pid: 23261)
Thu May 7 17:40:39 2020 - spawned uWSGI worker 1 (pid: 23267, cores: 1)
Thu May 7 17:40:39 2020 - spawned uWSGI worker 2 (pid: 23268, cores: 1)
How can I debug uWSGI? How can I see why Apache returns the 500 error? Have I done anything wrong with the conf? I find the docs not very useful when it comes to error debugging or understanding how to define modules.
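One quick way to narrow down where the 500 comes from is to bypass Apache and talk to the uWSGI http-socket directly, using the address from the ini above:
curl -v http://127.0.0.1:5232/
If that request also fails, the problem is in the uWSGI/Radicale layer; if it succeeds, the problem is in the Apache proxy configuration.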
Okay, after a week of contemplating, debugging and some swearing, I saw my quite stupid mistake :(
I configured an HTTP socket in uWSGI
http-socket = 127.0.0.1:5232
but specified the uwsgi protocol in Apache ...
ProxyPass / uwsgi://127.0.0.1:5232/
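In other words, the two sides have to agree on a protocol. Either of the following changes (not both) should resolve it; the first keeps the Apache line as it is, the second keeps the uWSGI line as it is:
# uWSGI ini: speak the native uwsgi protocol, matching ProxyPass uwsgi://
socket = 127.0.0.1:5232
# or Apache: proxy plain HTTP to the existing http-socket
ProxyPass / http://127.0.0.1:5232/
Note that the uwsgi:// scheme needs mod_proxy_uwsgi enabled, while http:// only needs mod_proxy_http.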

Faking date/time of child process

On Unix systems, is there a way to fake the perceived date and time of a child process?
I.e., imagine:
$ date
Fri Jun 28 10:50:35 CEST 2019
$ with_date 10/05/2019 date
Fri May 10 10:50:36 CEST 2019
How to implement the with_date command?
The typical use case would be the testing of date/time-related software, simulating various conditions.
There is the libfaketime library. It uses a library preload mechanism to intercept the time-related library calls of the programs it runs. A use case (from the manual) is:
user@host> date
Tue Nov 23 12:01:05 CEST 2016
user@host> LD_PRELOAD=/usr/local/lib/libfaketime.so.1 FAKETIME="-15d" date
Mon Nov 8 12:01:12 CEST 2016
user@host> LD_PRELOAD=/usr/local/lib/libfaketime.so.1 FAKETIME="-15d" FAKETIME_DONT_FAKE_MONOTONIC=1 java -version
java version "1.8.0_111"
Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
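Building on that, a minimal with_date wrapper could look like the sketch below, here using libfaketime's @-prefixed absolute-timestamp format rather than the DD/MM/YYYY form from the question (the library path is an assumption; adjust it to wherever libfaketime is installed):
#!/bin/sh
# with_date: run a command under a faked absolute date via libfaketime (sketch)
# usage: with_date "2019-05-10 10:50:36" date
fake="$1"; shift
exec env LD_PRELOAD=/usr/local/lib/libfaketime.so.1 FAKETIME="@$fake" "$@"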

Symfony 3.4 WebServerBundle PHP version

I have both php7.0 and php7.1 installed on Ubuntu.
Both CLI and Apache are switched to use 7.1
php -v
PHP 7.1.14-1+ubuntu16.04.1+deb.sury.org+1 (cli) (built: Feb 9 2018 09:33:27) ( NTS )
but the built-in HTTP server of Symfony still uses 7.0:
PHP Version  7.0.27-1+ubuntu16.04.1+deb.sury.org+1
System       Linux spring.home.lan 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64
Build Date   Jan 5 2018 14:12:46
Server API   Built-in HTTP server
Any suggestions as to what is wrong?
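One way to narrow this down is to compare the interpreter the console resolves to with an explicitly chosen one when starting the WebServerBundle server (the php7.1 path is an assumption based on the sury packaging):
# which binary does the CLI actually resolve to?
which php && php -v
# start the built-in server with a specific interpreter
/usr/bin/php7.1 bin/console server:start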

Reset username and password for JFrog

I installed the JFrog standalone version on Ubuntu. I don't know my JFrog username and password. I also checked the /usr/lib/apache-tomcat-8.5.16/conf/server.xml file, but it does not contain any username and password. I also clicked on "Set Me Up", but the command-line interface for pushing an artifact also prompts for a username and password.
ravi@ravi-Inspiron-5537:~$ systemctl status artifactory.service
● artifactory.service - Setup Systemd script for Artifactory in Tomcat Servlet E
Loaded: loaded (/lib/systemd/system/artifactory.service; enabled; vendor pres
Active: active (running) since Fri 2017-08-11 10:11:41 EDT; 37min ago
Process: 16482 ExecStart=/opt/jfrog/artifactory/bin/artifactoryManage.sh start
Main PID: 16532 (java)
CGroup: /system.slice/artifactory.service
‣ 16532 /usr/bin/java -Djava.util.logging.config.file=/opt/jfrog/arti
Aug 11 10:11:17 ravi-Inspiron-5537 su[16508]: Successful su for artifactory by r
Aug 11 10:11:17 ravi-Inspiron-5537 su[16508]: + ??? root:artifactory
Aug 11 10:11:17 ravi-Inspiron-5537 su[16508]: pam_unix(su:session): session open
Aug 11 10:11:18 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Max number of op
Aug 11 10:11:18 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Using ARTIFACTOR
Aug 11 10:11:18 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Using ARTIFACTOR
Aug 11 10:11:18 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Creating directo
Aug 11 10:11:18 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Tomcat started.
Aug 11 10:11:41 ravi-Inspiron-5537 artifactoryManage.sh[16482]: Artifactory Tomc
Aug 11 10:11:41 ravi-Inspiron-5537 systemd[1]: Started Setup Systemd script for
lines 1-18/18 (END)
The default username and password of Artifactory are:
User: admin
Pass: password
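You can verify those defaults against the REST API before trying the UI or the CLI (the port and context path are the defaults for a standalone install and may differ on your machine):
curl -u admin:password http://localhost:8081/artifactory/api/system/ping
A plain OK response means the credentials are accepted; after that, change the admin password from the web UI.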

flex amf request not sent out anymore in AIR on Windows

I have a Flex application that runs in the browser and as a standalone application with AIR. In our application, you can download a zip file, generated on the server side, that contains a report. This zip file can be quite big (multiple GB) and takes a while to download. Basically, we do an HTTP POST from the client side:
m_file = new FileReference();
m_file.download( request, filename );
On the other hand, we send a ping from the client to the server every 15 seconds, to make sure the server is still there. The server responds to the ping with its name. When we do not get a response back within 30 seconds, we show the user a message that the server is down.
Now, the actual problem is that the ping request to the server never gets sent while we are downloading the report. The strange thing is that this only happens in Adobe AIR on Windows. It is not a problem in Firefox on Windows. It is also not a problem with AIR on Mac OS X.
I have put some logging around the ping operation to show it:
[trace] 1: Mon Jul 16 16:00:20 GMT+0200 2012
[trace] return : 1: Mon Jul 16 16:00:20 GMT+0200 2012
[trace] 2: Mon Jul 16 16:00:35 GMT+0200 2012
[trace] return : 2: Mon Jul 16 16:00:35 GMT+0200 2012
[trace] 3: Mon Jul 16 16:00:50 GMT+0200 2012
[trace] return : 3: Mon Jul 16 16:00:50 GMT+0200 2012
[trace] 4: Mon Jul 16 16:01:05 GMT+0200 2012
[trace] 5: Mon Jul 16 16:01:20 GMT+0200 2012
[trace] 6: Mon Jul 16 16:01:35 GMT+0200 2012
[trace] 7: Mon Jul 16 16:01:50 GMT+0200 2012
[trace] fault 4: Mon Jul 16 16:02:05 GMT+0200 2012
[trace] fault 5: Mon Jul 16 16:02:05 GMT+0200 2012
[trace] fault 6: Mon Jul 16 16:02:05 GMT+0200 2012
[trace] fault 7: Mon Jul 16 16:02:05 GMT+0200 2012
[trace] 8: Mon Jul 16 16:02:05 GMT+0200 2012
[trace] fault 8: Mon Jul 16 16:02:20 GMT+0200 2012
[trace] 9: Mon Jul 16 16:02:21 GMT+0200 2012
Player session terminated
[AIR Debug Launcher]: Process finished with exit code 1
You can see that operations 1, 2 and 3 ran fine and returned immediately. At that point, we start the report download. Notice how operation 4 does not return immediately. After 15 seconds, operation 5 starts, then 6 and 7, each 15 seconds apart.
Then suddenly, exactly after 1 minute, operation 4 returns into the fault handler, as do all the other requests that were started during that minute.
Note that in the background, the HTTP POST keeps running as long as the AIR app keeps running (checked with Charles). The ping requests themselves do not show up in Charles, nor do I see them in the BlazeDS debug logging if I enable that. It is as if the AIR application never even tries to send the ping request to the server.
Is there anybody who has an idea what might be wrong? Any additional things I can check/debug?
I am using Flex SDK 4.5 and Adobe AIR 3.3.
Maybe this is due to an HTTP connection limit on Windows. I found this post that describes a similar issue: Adobe AIR HTTP Connection Limit
