ADFS doesn't have P3P policy

I have an application that uses SAML authentication; we have installed AD FS 3.0 on a Windows Server 2012 R2 machine. I think users do get authenticated, but there is an issue somewhere, because my application returns an error. Here is the response header I get:
HTTP/1.1 200 OK
Cache-Control: no-cache,no-store
Pragma: no-cache
Content-Length: 5851
Content-Type: text/html; charset=utf-8
Expires: -1
Server: Microsoft-HTTPAPI/2.0
P3P: CP="ADFS doesn't have P3P policy, please contact your site's admin for more details."
Set-Cookie: MSISAuthenticated=OC8xOC8yMDE1IDI6NTg6MzQgUE0=; path=/adfs; HttpOnly; Secure
Set-Cookie: MSISLoopDetectionCookie=MjAxNS0wOC0xODoxNDo1ODozNFpcMQ==; path=/adfs; HttpOnly; Secure
Date: Tue, 18 Aug 2015 14:58:34 GMT
To my understanding the user does get authenticated, yet my application fails to continue. Searching Google I found this link, but that KB is already installed on the ADFS server. I believe this is failing due to the P3P error. Any suggestions?

Found this in a forum; hopefully it works for some of you.
Run these commands (this is what ultimately worked):
On TptDevADFS1 (the server with ADFS 3 installed), I used this command file:
SETLOCAL
SET cert_folder=%HOMEPATH%\Documents\Certificates
IF NOT EXIST "%cert_folder%" md "%cert_folder%"
SET sdk_folder=C:\Program Files (x86)\Windows Kits\8.1\bin\x64
IF NOT EXIST "%sdk_folder%" ECHO SDK FOLDER %sdk_folder% NOT FOUND.
IF NOT EXIST "%sdk_folder%" EXIT
CD "%sdk_folder%"
makecert -r -pe -n "CN=*.TptDev.com" -ss my -sr LocalMachine -eku "1.3.6.1.5.5.7.3.1","1.3.6.1.4.1.311.10.3.12" -len 2048 -sky exchange -e "01/01/2021" "%cert_folder%\TptDev.com_%COMPUTERNAME%_wildcard_exchDocSign.cer"
ENDLOCAL
Resulted in this command and output:
C:\Program Files (x86)\Windows Kits\8.1\bin\x64>makecert -r -pe -n "CN=*.TptDev.com" -ss my -sr LocalMachine -eku "1.3.6.1.5.5.7.3.1","1.3.6.1.4.1.311.10.3.12" -len 2048 -sky exchange -e "01/01/2021" "\Users\Administrator.TPTDEV\Documents\Certificates\TptDev.com_TPTDEVADFS1_wildcard_exchDocSign.cer"
Succeeded
C:\Program Files (x86)\Windows Kits\8.1\bin\x64>
The above command imported the certificate into
(Local Computer) Personal->Certificates (a.k.a. the certificate store “My”).
Then browse to the certificate file and import it (with exportable key) into
(Local Computer) Trusted Root Certificate Authorities->Certificates
Export the key in the Personal store as a PFX file with these options:
include private key, include all certificates in the chain, export all extended properties.
Copy the PFX file to TptDevCRM1 (the server Dynamics CRM 2015 is installed on).
On TptDevCRM1:
Import the PFX certificate file into (Local Computer) Personal->Certificates.
Import the PFX certificate file into (Local Computer) Trusted Root Certificate Authorities->Certificates.


I can't access a virtual machine's web server domain from the host machine (Debian 11)

I have installed an Apache server on Debian 11 in a VirtualBox machine.
I have set the static IP to 192.168.1.69 and added firewall rules to allow traffic through on port 80.
I can see the default Debian web page if I browse to 192.168.1.69 from Windows; I have even installed Webmin on 192.168.1.69:10000, which works in Firefox on the host machine, and I can SSH to 192.168.1.69 from Windows PowerShell on the host machine.
I created a site at /var/www/tests.dev with a /public directory containing an index.html that says "You made it".
I have run:
chown -R www-data /var/www/tests.dev
This is the virtual server configuration:
ServerName tests.dev
ServerAdmin webmaster@tests.dev
ServerAlias www.tests.dev
DocumentRoot /var/www/tests.dev/public
CustomLog /var/log/apache2/access-tests.dev.log "combined"
I have added this line to the Windows hosts file on the host machine:
192.168.1.69 tests.dev
If I ping tests.dev from Windows PowerShell it works, and if I ssh user@tests.dev it works and I can log in.
But in Firefox on the host machine, if I go to http://tests.dev there is no way; all I get is "the connection has timed out". I don't know what else to try; I have tried everything.
But... and this is the interesting thing... if I do a wget http://tests.dev in PowerShell (where wget is an alias for Invoke-WebRequest) I get
StatusCode : 200
StatusDescription : OK
Content : <html>
<h1>You made it</h1>
</html>
RawContent : HTTP/1.1 200 OK
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Accept-Ranges: bytes
Content-Length: 36
Content-Type: text/html
Date: Fri, 27 Jan 2023 14:49:00 GMT
ETag: "24-5f33de871d523...
Forms : {}
Headers : {[Keep-Alive, timeout=5, max=100], [Connection, Keep-Alive], [Accept-Ranges, bytes],
[Content-Length, 36]...}
Images : {}
InputFields : {}
Links : {}
ParsedHtml : mshtml.HTMLDocumentClass
RawContentLength : 36
Can anyone help me understand why I can't access tests.dev in Firefox on my host machine? Thank you.
OK, the problem, as I started figuring out while writing the question, is Firefox: the .dev TLD is on the HSTS preload list, so the browser silently upgrades http:// to https://, and nothing is listening on port 443. I downloaded SeaMonkey, tried "http://tests.dev", and now I see the page with the "You made it".
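For anyone hitting the same wall: the entire .dev top-level domain is on the browsers' HSTS preload list, so Firefox and Chrome upgrade any http:// URL under it to https:// before a single packet is sent. A rough Python model of that behaviour (the real preload list is huge; the two TLDs below are just a known subset used for illustration):

```python
# Illustrative subset only: .dev and .app are known HSTS-preloaded TLDs.
PRELOADED_TLDS = {"dev", "app"}

def effective_url(url: str) -> str:
    """Return the URL a preload-aware browser would actually request."""
    scheme, _, rest = url.partition("://")
    host = rest.split("/", 1)[0].split(":", 1)[0]
    tld = host.rsplit(".", 1)[-1]
    if scheme == "http" and tld in PRELOADED_TLDS:
        return "https://" + rest  # silent upgrade, no request on port 80
    return url

print(effective_url("http://tests.dev"))   # upgraded to https://tests.dev
print(effective_url("http://tests.test"))  # left alone; .test is reserved for local use
```

Using a reserved TLD such as .test or .localhost for local development avoids the upgrade entirely, with no need to switch browsers.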

Infinite redirect in Symfony

I'm setting up a test environment for our Symfony web site. I have a basic version working on my Windows machine for development, but trying to set up an AWS replica of the production web site as test causes all the valid pages to end up in an infinite 301 redirect. I'm guessing I've missed something in the configuration.
Symfony 2.8
AWS Ubuntu server
SSL enabled, and non-Symfony files served correctly
This is the raw response header for /app/dashboard:
HTTP/1.1 301 Moved Permanently
Cache-Control: no-cache
Content-Type: text/html; charset=UTF-8
Date: Mon, 06 Nov 2017 05:43:18 GMT
Location: https://****.***/app/dashboard
Server: Apache/2.4.7 (Ubuntu)
X-Powered-By: PHP/5.5.9-1ubuntu4.22
Content-Length: 424
Connection: keep-alive
The Apache, PHP and Symfony /app/config/parameters.yml configurations are identical to the Production and Dev environments, except for server addresses. Composer has been run to download all the project dependencies.
Production and Dev both work fine. It's only Test that has the infinite redirect loops.
I'm sure there's something simple I've overlooked but I can't find it.
UPDATE 8 Nov 2017
/app_dev.php works, but /app.php has the infinite loop.
GET /app.php (https)
Location: http://****.***/
GET / (http)
Location: http://****.***/app/dashboard
GET /app/dashboard (http)
Location: https://****.***/app/dashboard
GET /app/dashboard (https)
Location: https://****.***/app/dashboard
Unfortunately, the problem went away "by itself". I had tried a few things to debug it that didn't work, so I used rsync to reset all the files back to a pristine state, at which point the problem stopped happening.
UPDATE
I managed to break it again somehow. I then remembered I'd changed the trusted_proxies in the configuration, and that got overwritten when I rsync'd it. Adding the correct settings back in fixed the problem!
framework:
    ...
    trusted_hosts: ~
    trusted_proxies: [<correct settings go in here>]
    ...
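The trusted_proxies fix makes sense once you model what the app sees behind an SSL-terminating proxy or load balancer: the socket into PHP is always plain HTTP, and the original scheme survives only in the X-Forwarded-Proto header, which the framework honours only for peers it trusts. A minimal Python model of that decision (function and parameter names are illustrative, not Symfony's API):

```python
def request_scheme(peer_ip, forwarded_proto, trusted_proxies):
    """Scheme the app believes it was reached over.

    The proxy terminates SSL, so the connection to the app is plain
    HTTP; the original scheme survives only in X-Forwarded-Proto, and
    that header is honoured only for peers listed in trusted_proxies.
    """
    if peer_ip in trusted_proxies and forwarded_proto:
        return forwarded_proto
    return "http"

# Proxy not trusted: the app sees "http" and redirects to https forever.
print(request_scheme("10.0.0.5", "https", trusted_proxies=[]))
# Proxy trusted: the app sees "https" and serves the page.
print(request_scheme("10.0.0.5", "https", trusted_proxies=["10.0.0.5"]))
```

With an untrusted proxy the app believes every request arrived over plain HTTP, answers with a 301 to https, the proxy forwards the next request over plain HTTP again, and the loop never ends.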

Artifactory - Generic Repo: Archive download failing

Scenario
Attempting to curl an archive from a generic repository in Artifactory, which has worked for me for the past few days.
Code
curl -i -H 'X-JFrog-Art-Api: <api-key>' -XGET 'https://<host>/artifactory/api/archive/download/<repo-name>/<dir>?archiveType=zip' -o <out-file>
Problem
Today I tried running my curl command again and I get the error below:
HTTP/1.1 400 Bad Request
Date: Thu, 09 Mar 2017 13:49:14 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: keep-alive
Server: Artifactory/4.16.0
X-Artifactory-Id: <removed>
X-Artifactory-Node-Id: <removed>
{
"errors" : [ {
"status" : 400,
"message" : "There are too many folder download requests currently running, try again later."
} ]
}
Question
How can I resolve this? I have tried waiting it out, but it's been more than 12 hours and I still cannot pull down what I need.
This error message indicates that you have more than 10 concurrent download requests for folder archives. This is the default limit, but it can be changed.
You can configure the maximum number of concurrent folder downloads in Admin > General Configuration > Folder Download Settings > Max Parallel Folder Downloads.
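Since the limit is on concurrency, a client-side workaround is to back off and retry rather than wait a fixed 12 hours. A small sketch of that idea in Python, where fetch is any callable returning (status, body) — e.g. a wrapper around the curl call above; the names and defaults are illustrative:

```python
import time

def download_with_retry(fetch, attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry fetch() while the parallel-folder-download limit is hit.

    Artifactory answers 400 with a "too many folder download requests"
    body when the limit (10 by default) is reached, so back off
    exponentially and try again instead of failing outright.
    """
    status, body = fetch()
    for attempt in range(attempts - 1):
        if status != 400 or "too many folder download" not in body.lower():
            break
        sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
        status, body = fetch()
    return status, body
```

The durable fix is still server-side: raise Max Parallel Folder Downloads as described above, or download individual files instead of a whole folder archive.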

Getting 404 error if requesting a page through proxy, but 200 if connecting directly

I am developing an HTTP proxy in Java. I resend all the data from client to server without touching it, but for some URLs (for example this one) the server returns a 404 error when I connect through my proxy.
The requested URL uses Varnish caching, so that might be the root of the problem. I cannot reconfigure it; it is not mine.
If I request that URL directly with a browser, the server returns 200 and the image is shown correctly.
I am stuck, because I do not even know what to read or how to compose a search query.
Thanks a lot.
Fix the Host: header of the re-issued request. The request going out from the proxy either has no Host header or a broken one (or only X-Host exists). Also note that the proxy application performs its own DNS lookup, which might yield a different IP address than the one your local computer (where you issued the original request) resolved.
This works:
> curl -s -D - -o /dev/null http://212.25.95.152/w/w-200/1902047-41.jpg -H "Host: msc.wcdn.co.il"
HTTP/1.1 200 OK
Content-Type: image/jpeg
Cache-Control: max-age = 315360000
magicmarker: 1
Content-Length: 27922
Accept-Ranges: bytes
Date: Sun, 05 Jul 2015 00:52:08 GMT
X-Varnish: 2508753650 2474246958
Age: 67952
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
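The fix itself is a small piece of string surgery on the request head before it is sent upstream. The asker's proxy is in Java, but the same idea in a Python sketch (names are illustrative):

```python
def fix_host_header(raw_request: str, origin_host: str) -> str:
    """Rewrite (or insert) the Host header before forwarding upstream.

    Virtual-hosting servers and caches like Varnish route on Host, so a
    proxy that connects by IP but forwards a missing or wrong Host
    header gets a 404 even though the object exists.
    """
    head, sep, body = raw_request.partition("\r\n\r\n")
    lines = head.split("\r\n")
    request_line, headers = lines[0], lines[1:]
    # Drop any existing Host header (case-insensitive), then add ours.
    headers = [h for h in headers if not h.lower().startswith("host:")]
    headers.insert(0, f"Host: {origin_host}")
    return "\r\n".join([request_line] + headers) + sep + body
```

The same applies in reverse: when the proxy reads the client's request, it should forward the client's original Host value rather than the IP it resolved.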

Programmatically updating a VSTO Word add-in in IIS 7.5

We have recently moved to a new web server (from IIS 6 to IIS 7.5) and I'm having some trouble updating our VSTO Word add-in.
Our app checks for updates manually when logging in, and if a newer version is found it updates like this (let me know if there is a better way to do this; I've tried ApplicationDeployment.Update() but had no luck with it either):
WebBrowser browser = new WebBrowser();
browser.Visible = false;
Uri setupLocation = new Uri("https://updatelocation.com/setup.exe");
browser.Url = setupLocation;
This used to launch the setup and update the app, and when the user restarted Word they would have the new version installed. Since the server move, the update no longer happens and no exceptions are thrown. Browsing to the URL manually launches the updater as expected. What would I need to change to get this to work?
Note I have the following MIME types set up for the folder in IIS:
.application    application/x-ms-application
.manifest       application/x-ms-manifest
.deploy         application/octet-stream
.msu            application/octet-stream
.msp            application/octet-stream
.exe            application/octet-stream
Edit
OK, I've had a look in Fiddler and it's returning a body size of -1.
If I enter the same URL in IE, you can see that setup.exe is launched without problems.
This is what Fiddler displays in the raw view when accessing from Word:
HTTP/1.1 200 OK
Content-Type: application/octet-stream
Last-Modified: Tue, 27 Sep 2011 15:07:42 GMT
Accept-Ranges: bytes
ETag: "9bd0c334277dcc1:0"
Server: Microsoft-IIS/7.5
X-Powered-By: ASP.NET
Date: Mon, 14 Nov 2011 07:42:18 GMT
Content-Length: 735608
MZ��������������������#������������������������������������������ �!�L�!This program cannot be run in DOS mode. $�������
*** FIDDLER: RawDisplay truncated at 128 characters. Right-click to disable truncation. ***
Have you tried a tool like (for instance) Fiddler2 to see what HTTP traffic is actually generated?
Does the client make a server call? What does the server actually return?
Then:
Make the calls from within word (which isn't working)
Make the calls by hand (which is working)
Compare both the request and response packages from those calls to spot the differences
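The comparison step is mechanical enough to script: capture the headers of the working (IE) and failing (Word) requests in Fiddler, then diff them. A quick Python sketch (the header values in the usage below are made up for illustration):

```python
def diff_headers(working: dict, broken: dict) -> dict:
    """Return {header: (working_value, broken_value)} for every header
    that differs between two requests; None marks a header missing on
    that side. Header names are compared case-insensitively."""
    def lookup(headers, low):
        return next((v for k, v in headers.items() if k.lower() == low), None)

    names = {k.lower(): k for k in list(working) + list(broken)}
    return {
        name: (lookup(working, low), lookup(broken, low))
        for low, name in sorted(names.items())
        if lookup(working, low) != lookup(broken, low)
    }
```

Whatever the diff shows — a missing User-Agent, a different Accept-Encoding, an absent Range header — is usually the place to start looking for why IIS serves the two requests differently.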
