Why is my controller being called twice from an email link - asp.net-core-webapi

Been struggling with this for hours.
This is an ASP.NET Core 3.0 app.
It emails an activation link.
I then pick up that email in my inbox and click on it.
This link is:
https://apc01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.myserver.com%2FAccount%2FActivate%3Fpin%3DwDiC3S&data=02%7C01%7C%7C8311079d8b314d288f7a08d77e73c924%7C84df9e7fe9f640afb435aaaaaaaaaaaa%7C1%7C0%7C637116908072496731&sdata=y80XhRTBI%2FOJq6UrN8Yw%2B3nWDjrb96IprWR2IKIouVU%3D&reserved=0
It then emails me a confirmation message (by calling that view). But it does this twice.
What is weird is that if I copy and paste this URL from one browser to another, I receive just the expected single confirmation email.
The only difference I can see is that the SafeLinks wrapping is removed when the link is copied and pasted, i.e.:
https://www.myserver.com/Account/Activate?pin=wDiC3S
I do not know how to debug this.

I have just come across this situation. It took me a few days to track down. I have a link within an Outlook message, and I wrote code to make sure it was not called twice; that code was triggered the very first time I clicked the link. Eventually I realised that the email client itself must be calling the link twice. I'm going to have to write a workaround.
I'm glad I found it. It was driving me mad!
I would consider this to be an Outlook bug.
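One possible workaround (a minimal sketch, assuming an ASP.NET Core MVC controller; the _accounts service is hypothetical): since the scanner only issues GET requests, let the emailed link render a confirmation page and perform the actual activation on a POST.
[HttpGet]
public IActionResult Activate(string pin)
{
    // The emailed link lands here; render a page with a single
    // "Confirm activation" button that posts back to ActivateConfirmed.
    return View(model: pin);
}

[HttpPost, ValidateAntiForgeryToken]
public async Task<IActionResult> ActivateConfirmed(string pin)
{
    // Only a real user pressing the button reaches this point.
    // _accounts.ActivateAsync is a hypothetical, idempotent service call.
    await _accounts.ActivateAsync(pin);
    return View("Activated");
}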

I had a similar problem a while back that seemed to be tied to the email client pre-fetching and visiting the link. You may want to try a different email provider and client as part of your debugging efforts.
Here is a related question on the topic -
How to stop e-mail clients from visiting links in e-mail automatically?
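To confirm that it is the scanner and not your own code, you could log each caller of the activation endpoint; a minimal sketch, assuming ASP.NET Core 3.0 middleware registered in Startup.Configure (needs using Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Logging):
app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/Account/Activate"))
    {
        // SafeLinks/ATP scanners typically show a different User-Agent and
        // source IP than a real browser, so two log lines will tell you
        // who the second caller is.
        var logger = context.RequestServices
            .GetRequiredService<ILoggerFactory>()
            .CreateLogger("ActivationProbe");
        logger.LogInformation("Activate hit: pin={Pin} ua={UserAgent} ip={Ip}",
            context.Request.Query["pin"].ToString(),
            context.Request.Headers["User-Agent"].ToString(),
            context.Connection.RemoteIpAddress);
    }
    await next();
});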

I have had this issue for a couple of days already. Through trial and error in my own testing, I found the solution: you just need to set up the SMTP configuration with your Outlook account.
Using SwiftMailer
I had this configuration before:
$trans = Swift_SmtpTransport::newInstance()
    ->setHost("xxxxxxxxxxx.mail.protection.outlook.com")
    ->setPort(25);
I changed it to:
$trans = Swift_SmtpTransport::newInstance('smtp.office365.com', 587, 'tls')
    ->setUsername('xxxxx@xxxxxxx')
    ->setPassword('xxxxxxxxxxxx');
Now it's working properly. I think ATP (aka the SafeLinks protection feature) only kicks in when the sender's email address is not verified within your organization.

Related

Website debugger to find parameters passed on user submission

I was wondering if there is a way to see what parameters or other information are passed upon submitting a form on another website when you don't have any of the server code.
Here is the page I am trying to debug - https://umbc.t2hosted.com/cit/index.aspx.
When I put information into the fields and submit it, no data is added to the URL as there would be in a regular GET request. Is there any tool that can help me find out what parameters are actually passed, so that I may simulate user requests with a program?
Thank you in advance for your help.
You can use a debugging proxy such as Fiddler to see all the data sent from your machine to the website when you submit the form.
This will let you see the HTTP messages sent from your browser to the website. Once you've seen and understood how the messages are sent, it should be relatively easy to reproduce them with another program.
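For example, once Fiddler shows the exact POST the browser sends, you can replay it from code; a minimal C# sketch (the form field names and values are placeholders; copy the real ones from the capture):
// Requires: using System.Net.Http; using System.Collections.Generic;
var client = new HttpClient();
var form = new FormUrlEncodedContent(new Dictionary<string, string>
{
    // Placeholder fields; ASP.NET pages usually also need __VIEWSTATE etc.
    ["__VIEWSTATE"] = "value-from-capture",
    ["ctl00$searchBox"] = "value-from-capture",
});
var response = await client.PostAsync("https://umbc.t2hosted.com/cit/index.aspx", form);
Console.WriteLine(await response.Content.ReadAsStringAsync());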

Drupal xmlrpc user.login suddenly fails

I am accessing a Drupal Views feed through xmlrpc. The script has worked in the past, and my goal today was solely to access another feed. In theory, there was nothing to do except change the name of the feed. The endpoint had not changed, my domain had not changed, and I can log in to the remote site, so my user credentials there are valid.
I am scratching my head as to what may have changed. Is there an obvious question that I have missed? What could have changed on the Drupal end that I should be taking into account?
I can also get a session ID for an anonymous user without any problem.
The failure comes during the complicated authentication step (which has worked in the past).
Any suggestions?
Thanks.
Ah... in case anyone else has the same problem: as I worked through my script, printing out its effect at each line, I came across a comment I had made when I wrote it.
Make sure the client and the remote server are set to the same time, preferably the time provided by www.time.is.
My PC was running a minute slow. The default resynchronisation task on Windows 7 runs at 1 am on a Sunday. Change that to a more sensible time.
And for an immediate fix, change the PC time to within a few seconds of www.time.is.
That was the problem. Authenticated login uses a timestamp. If the remote server regards your time as too inaccurate, it will reject your login. Make sure the client is running with an accurate clock.
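For a quick sanity check before touching the clock settings, something like this C# sketch compares the local clock with the Date header that any well-known HTTPS server returns (roughly one-second resolution, enough to catch a minute of drift):
// Requires: using System.Net.Http;
using var client = new HttpClient();
using var response = await client.SendAsync(
    new HttpRequestMessage(HttpMethod.Head, "https://www.time.is/"));
if (response.Headers.Date is DateTimeOffset serverTime)
{
    // A skew of more than a few seconds can make timestamped logins fail.
    var skew = DateTimeOffset.UtcNow - serverTime;
    Console.WriteLine($"Local clock differs from server time by ~{skew.TotalSeconds:F0}s");
}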

CRM 2011: Using Organization Service returns metadata reference issue

I'm using the Organization Service URI to upload documents to our SharePoint site from notes and attachments. I'm using the code found here, and all is working apart from where I set the organizationUri. I get an error of "metadata contains a reference that cannot be resolved". I have tried retyping the link and everything else I can think of, but I always get this error.
The strange thing is that this was working fine a couple of days ago, but when I tried it the next morning it refused to work and now won't do anything at all. Before this error I was getting an error saying that the URI scheme is not valid. I don't know what could have caused this to stop working, but I've tried everything I can think of and need some help.
Thanks
EDIT: The error message has changed to "A proxy type with the name account has been defined by another assembly". Still not sure what it means, but I'm hoping this might be easier to fix.
I'm not sure if this is the actual fix for this problem, but I tried the following and it seemed to work. So either it is the answer or I was just lucky and something else changed too, but anyway...
What I did was change the way I connect to the organization service. Before, I was using the user credentials, organization URI, and home realm URI together to get the OrganizationServiceProxy, in the form OrganizationServiceProxy orgService = new OrganizationServiceProxy(organizationUri, homeRealmUri, cred, null);.
Now I use a longer method: first I set up the discovery service with the user credentials, then use them to create the discovery service proxy, which is then authenticated. Then I simply use a RetrieveOrganizationRequest / Response to get the organization service endpoint, which I can use in place of the original.
Hope that makes sense, but if anyone wants I can put some code up showing what I did.
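For reference, a minimal sketch of that discovery-service approach (CRM 2011 SDK; the discovery URL, credentials, and organization unique name are placeholders):
// Requires: using System; using System.ServiceModel.Description;
// using Microsoft.Xrm.Sdk.Client; using Microsoft.Xrm.Sdk.Discovery;
var creds = new ClientCredentials();
creds.Windows.ClientCredential =
    new System.Net.NetworkCredential("user", "password", "domain");

// Placeholder endpoint; use your deployment's Discovery.svc address.
var discoveryUri = new Uri("https://crm.example.com/XRMServices/2011/Discovery.svc");

using (var discovery = new DiscoveryServiceProxy(discoveryUri, null, creds, null))
{
    discovery.Authenticate();

    // "MyOrg" stands in for the organization's unique name.
    var request = new RetrieveOrganizationRequest { UniqueName = "MyOrg" };
    var response = (RetrieveOrganizationResponse)discovery.Execute(request);

    var orgUri = new Uri(response.Detail.Endpoints[EndpointType.OrganizationService]);
    using (var orgService = new OrganizationServiceProxy(orgUri, null, creds, null))
    {
        orgService.Authenticate();
        // orgService can now be used in place of the original proxy.
    }
}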

Scraping ASP.NET with Python and urllib2

I've been trying (unsuccessfully, I might add) to scrape a website created with the Microsoft stack (ASP.NET, C#, IIS) using Python and urllib/urllib2. I'm also using cookielib to manage cookies. After spending a long time profiling the website in Chrome and examining the headers, I've been unable to come up with a working solution to log in. Currently, in an attempt to get it working at the most basic level, I've hard-coded the encoded URL string with all of the appropriate form data (even the ViewState, etc.). I'm also passing valid headers.
The response that I'm currently receiving reads:
29|pageRedirect||/?aspxerrorpath=/default.aspx|
I'm not sure how to interpret the above. Also, I've looked pretty extensively at the client-side code used in processing the login fields.
Here's how it works: you enter your username/password and hit a 'Login' button. Pressing the Enter key also simulates this button press. The input fields aren't in a form. Instead, there are a few onClick events on said Login button (most of which are just for aesthetics), but the one in question handles validation. It does some rudimentary checks before sending the data off to the server side. Based on the web resources, it definitely appears to be using .NET AJAX.
When logging into this website normally, you request the domain with a POST whose form data contains your username and password, among other things. Then there is some sort of URL rewrite or redirect that takes you to a content page at url.com/twitter. When attempting to access url.com/twitter directly, it redirects you to the main page.
I should note that I've decided to leave the URL in question out. I'm not doing anything malicious, just automating a very monotonous check once every reasonable increment of time (I'm familiar with compassionate screen scraping). However, it would be trivial to associate my StackOverflow account with that account in the event that it didn't make the domain owners happy.
My question is: I've been able to successfully log in and automate services in the past, none of which were .NET-based. Is there anything different that I should be doing, or maybe something I'm leaving out?
For anyone else that might be in a similar predicament in the future:
I'd just like to note that I've had a lot of success with a Greasemonkey user script in Chrome to do all of my scraping and automation. I found it to be a lot easier than Python + urllib2 (at least for this particular case). The user scripts are written in 100% JavaScript.
When scraping a web application, I use either:
1) Wireshark ... or ...
2) A logging proxy server (that logs headers as well as payload)
I then compare what the real application does (in this case, how your browser interacts with the site) with the scraper's logs. Working through the differences will bring you to a working solution.
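If the scraper's own traffic is hard to observe, you can route it through the same logging proxy so the two logs line up; a sketch of the idea in C# (assuming the proxy listens on Fiddler's default 127.0.0.1:8888):
// Requires: using System.Net; using System.Net.Http;
var handler = new HttpClientHandler
{
    // Send the scraper's requests through the local logging proxy so they
    // can be compared side by side with the browser's.
    Proxy = new WebProxy("http://127.0.0.1:8888"),
    UseProxy = true,
};
using var client = new HttpClient(handler);
var response = await client.GetAsync("https://example.com/");
Console.WriteLine(response.StatusCode);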

Duplicate Email notifications on Mercury Pressflow (drupal)

We’re running into an issue where duplicate notifications are sent to our users by the Notifications module on our Mercury Pressflow implementation. The duplicate messages are identical save for one thing: the [node-url] token is replaced with ‘default’ in one of the messages. All the other tokens in the message are replaced correctly.
The duplicate emails do not happen consistently, maybe on 10-15% of the notifications sent out; however, a duplicate message always has both the proper URL and the ‘default’ URL.
The only major modification we’ve made to Mercury was moving MySQL to its own server and adding replication. We currently have reads set up to round-robin between the two MySQL instances.
I have done the following troubleshooting based on similar issues I found:
- made sure the cron job is calling the correct URL
- replaced all configurations named ‘default’ with the site name (Memcached, Varnish, and Apache configs)
- disabled caching in a hook_init() implementation in the Notifications module
Has anyone out there experienced anything similar with Notifications and Mercury? Any and all advice is greatly appreciated.
The "Mercury" stack is external to Drupal and doesn't affect how email is queued or sent. Something within your messaging/notifications configuration or use is causing multiple messages to be created.
If you have any custom code here, I would look at that and try to trace the token variance.
