I'm using the CloudTrax HTTP Authentication API
to build custom authentication logic for a router that has a captive portal.
When the router sends a status request, the server responds with the following:
"CODE" "ACCEPT"
"RA" "1c65684265a2bb1a7c87e4d9565c2b18"
"SECONDS" "3600"
"DOWNLOAD" "2000"
"UPLOAD" "800"
This should be the correct format for a response that logs the user in. The problem is that the captive portal is still shown.
I don't know what the problem could be, and I can't find a log on the router or in CloudTrax that would show what's going wrong.
Edit:
I am processing the RA string in Django (Python):
import hashlib

SECRET = 'my-cloudtrax-secret'  # assumption: the shared secret configured in CloudTrax

def calculate_ra(request, response):
    code = response.get('CODE')
    if not code:
        return ''
    previous_ra = request.GET.get('ra')
    if not previous_ra:
        return ''
    if len(previous_ra) != 32:
        return ''
    # The incoming RA is a 32-character hex string; decode it to raw bytes.
    # (str.decode('hex') is Python 2 only; bytes.fromhex works on Python 3.)
    previous_ra = bytes.fromhex(previous_ra)
    m = hashlib.md5()
    # RA = md5(CODE + decoded incoming RA + shared secret)
    m.update(code.encode() + previous_ra + SECRET.encode())
    response['RA'] = m.hexdigest()
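For completeness, here is a minimal sketch of how the status response body could be assembled around this helper in a Django view; the view name and field values are illustrative, and I'm assuming the body is the plain-text quoted key/value format shown at the top:

from django.http import HttpResponse

def status(request):  # hypothetical view name
    # Illustrative field values; RA is derived from the incoming request.
    response = {'CODE': 'ACCEPT', 'SECONDS': '3600',
                'DOWNLOAD': '2000', 'UPLOAD': '800'}
    calculate_ra(request, response)  # fills in response['RA']
    # Render each field as a quoted "KEY" "VALUE" line, matching the format above.
    body = '\n'.join('"{0}" "{1}"'.format(k, v) for k, v in response.items())
    return HttpResponse(body, content_type='text/plain')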
Use SSH to log into the router and enable debug mode by sending these commands:
uci set http_portal.general.syslog=debug
uci commit
/etc/init.d/underdogsplash restart
Then use logread -f to view the logs in real time.
Don't forget to disable the debug mode when you're done:
uci set http_portal.general.syslog=err
uci commit
/etc/init.d/underdogsplash restart
I was trying to prune some users from my NATS server by doing:
nsc push --system-account SYS -u nats://localhost:4222 -P
but I got the following error:
server nats-comm-2 responded with error: delete accounts request by SOME_KEY_VALUE failed - delete must be enabled in server config
The meaning of the error is pretty obvious. The help documentation for nsc push -P says:
Only works with nats-resolver enabled nats-server. Mutually exclusive of account-removal/diff
But I'm not sure how to enable this in my nats server config. How do I allow for account pruning?
I found documentation in the resolver section, here, showing that I could add allow_delete: true to the config, but since the YAML format uses camelCase, I had to change it to allowDelete: true instead:
nats:
  auth:
    enabled: true
    resolver:
      type: full
      allowDelete: true
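For comparison, in a plain nats-server.conf (not managed through this YAML) the same switch is spelled allow_delete inside the resolver block. A minimal sketch, with an illustrative JWT directory:

resolver {
    type: full
    # directory where the resolver stores account JWTs (illustrative path)
    dir: './jwt'
    # allow nsc push -P to delete/prune accounts
    allow_delete: true
}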
I'm currently trying to configure our WSO2 API Manager 3.2 to use our SSL certificate.
I followed the documentation "Creating a New Keystore" and "Configuring Keystores in API Manager".
I have updated the deployment.toml file:
[server]
hostname = "myserver001.internal.net"
....
[keystore.tls]
file_name = "myKeystore.jks"
type = "JKS"
password = "secretpassword"
alias = "myserver001.internal.net"
key_password = "secretpassword"
[keystore.primary]
file_name = "wso2carbon.jks"
type = "JKS"
password = "wso2carbon"
alias = "wso2carbon"
key_password = "wso2carbon"
The server name is set to myserver001.
The domain name myserver001.internal.net is set in the hosts file.
After restarting the WSO2 APIM server, an exception is thrown:
SSLException: hostname in certificate didn't match:
<localhost> != <myserver001.internal.net> OR <myserver001.internal.net>
Does anyone know what else I have to change to get around this error, or where I can find additional documentation?
Any help is appreciated.
It looks like this is due to the missing service_url in the Traffic Manager/Event Hub config. Can you add/update the following config?
[apim.throttling]
...
service_url = "https://myserver001.internal.net:9443/services/"
I was in the exact same situation:
Migration from v2.6.0 to v3.2.0
Valid certificate
SSLException: hostname in certificate didn't match
I tried a lot of things, but after a lot of research, I'd say there are two possibilities:
It's a bug
It's a documentation problem, and we have to edit more things to make our certificates work
I believe that you changed -Dhttpclient.hostnameVerifier to a value other than "AllowAll". See the docs about hostname verification.
It's just a workaround and probably not that secure, but to avoid this error you'll have to put the default value for -Dhttpclient.hostnameVerifier back into the startup script:
service wso2am-3.2.0 stop
nano /usr/lib/wso2/wso2am/3.2.0/bin/wso2server.sh
-Dhttpclient.hostnameVerifier=AllowAll \
service wso2am-3.2.0 start
I am trying to set up mitmproxy so that I can make a request from my browser to https://{my-domain} and have it return a response from my local server running at http://localhost:3000 instead, but I cannot get the HTTPS request to reach my local server. I see the debugging statements from mitmproxy. Also, I can get it working for HTTP traffic, but not for HTTPS.
I read the mitmproxy addon docs and API docs.
I've installed the cert and I can monitor https through the proxy.
I'm using Mitmproxy: 4.0.4 and Python: 3.7.4
This is my addon (local-redirect.py) and how I run mitmproxy:
from mitmproxy import ctx
import mitmproxy.http

class LocalRedirect:
    def __init__(self):
        print('Loaded redirect addon')

    def request(self, flow: mitmproxy.http.HTTPFlow):
        if 'my-actual-domain-here' in flow.request.pretty_host:
            ctx.log.info("pretty host is: %s" % flow.request.pretty_host)
            flow.request.host = "localhost"
            flow.request.port = 3000
            flow.request.scheme = 'http'

addons = [
    LocalRedirect()
]
$ mitmdump -s local-redirect.py | grep pretty
When I visit the URL for my server, I see the logging statement, but my browser hangs on the request and no request reaches my local server.
The above addon was fine; however, my local server did not support HTTP/2.
Using the --no-http2 option was a quick fix:
mitmproxy -s local-redirect.py --no-http2 --view-filter localhost
or
mitmdump -s local-redirect.py --no-http2 localhost
I am trying to do a POST to an API endpoint using OpenEdge.
I have installed the SSL certificate of the site I am requesting from, but the HTTPS request fails, telling me it can't find that site's SSL certificate (in my /usr/dlc/certs):
"_errors": [
{
"_errorMsg": "ERROR condition: Secure Socket Layer (SSL) failure. error code -54: unable to get local issuer certificate: for 85cf5865.0 in /usr/dlc/certs (9318) (7211)",
"_errorNum": 9318
}
]
So, I have resorted to making an insecure request, the way curl does with --insecure or wget does with --no-check-certificate.
I am using the OpenEdge.Net libraries on OpenEdge 11.6:
creds = NEW Credentials('https://xxxx.com', 'usersname', 'password').
oPayload = NEW JsonObject().
oRequestBody = NEW String('CustomerReference=xxx&NoOfParcelsToAdd=2').

// Add credentials to the request
oRequest = RequestBuilder:Post('https://xxxxx.com/endpoint', oRequestBody)
    :UsingBasicAuthentication(creds)
    :ContentType('application/x-www-form-urlencoded')
    :AcceptJson()
    :Request.

oResponse = ClientBuilder:Build():Client:Execute(oRequest).
I want to know: for the OpenEdge.Net libraries, is there an option I can set to skip the certificate check?
I don't know of an option to skip verification, but I do know that a common source of that error is that your certificate authority is not in $DLC/certs. The default list of certificate authorities is fairly narrow.
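If that is the case here, one common fix is to add the issuer's CA certificate to $DLC/certs under its OpenSSL subject-hash name, which is how the 85cf5865.0 file name in your error was derived. A rough sketch, assuming the CA is available as a PEM file (the file name is illustrative, and the hash must come from the OpenSSL version OpenEdge uses):

# print the subject hash, e.g. 85cf5865
openssl x509 -in issuer-ca.pem -noout -hash
# install the CA under its hashed name
cp issuer-ca.pem /usr/dlc/certs/85cf5865.0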
USING OpenEdge.Net.HTTP.IHttpClientLibrary.
USING OpenEdge.Net.HTTP.Lib.ClientLibraryBuilder.

DEFINE VARIABLE oLib AS IHttpClientLibrary NO-UNDO.

/* Build a client library with host verification disabled */
oLib = ClientLibraryBuilder:Build()
    :sslVerifyHost(NO)
    :Library.

oHttpClient = ClientBuilder:Build()
    :UsingLibrary(oLib)
    :Client.
I'm trying to deploy OpenStack Icehouse on Ubuntu Server 14.04 by following the official documentation. But after Keystone, Nova, Neutron, and Glance were deployed, when I tried to launch a CirrOS instance with
nova boot --nic ... --image ... --flavor ...
it failed.
The Nova client log shows that:
The Neutron client (yes, it's Neutron; I guess they interact with each other during boot) tried to connect to the Neutron server to create a port on the tenant's network.
But the Neutron client built its token request to the Keystone server using {username: neutron, password: REDACTED} and used that token in the port-creation request to the Neutron server.
Finally, the Neutron server decided that this was an authentication problem.
I'm sure that I requested the instance creation using tenant 'demo''s info ($OS_TENANT_NAME, $OS_USERNAME, $OS_PASSWORD, and $OS_AUTH_URL were properly set to 'demo''s values) by running
source demoopenrc.sh
with demo's credentials in that file.
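For reference, here is a minimal sketch of what demoopenrc.sh typically looks like in the Icehouse install guide; the password and controller hostname are illustrative:

export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v2.0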
Is there something wrong in the Neutron client's configuration or the boot process? I paste the Keystone-related part of neutron.conf here:
[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = neutronpass
signing_dir = $state_path/keystone-signing
Since the Neutron client used the 'neutron' user's credentials to get the token, is there something wrong in this part?
The problem has been solved after nearly a month. For anyone still interested in this problem, please visit here