Unmasking "not found" errors and seeing the real exceptions in Plone

The following is from Zope's BaseRequest.py:
# traverseName() might raise ZTK's NotFound
except (KeyError, AttributeError, ztkNotFound):
    if response.debug_mode:
        return response.debugError(
            "Cannot locate object at: %s" % URL)
    else:
        return response.notFoundError(URL)
It translates various exceptions into a generic "not found" page. This is very bad for site developers, who cannot tell what actually went wrong on the site.
How does one disable this mechanism (there is clearly a response.debug_mode flag), so that the real exceptions are visible in the following situations (a sketch covering the first two cases follows this list):
When Plone runs in debug mode
In unit tests and functional tests
When Plone runs in production mode (e.g. temporarily, to see why some URL really fails)
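A minimal sketch of the usual approaches for the first two cases, assuming a standard buildout and a plone.app.testing-based functional test layer; the names used here (e.g. self.layer['app']) come from that setup, not from the question:

# 1. Functional tests: tell the test browser not to swallow publisher errors,
#    so the original exception propagates into the test run instead of being
#    rendered as a not-found page.
from plone.testing.z2 import Browser

browser = Browser(self.layer['app'])
browser.handleErrors = False  # raise the real exception in the test

# 2. Debug mode: response.debug_mode follows Zope's debug-mode setting, so
#    starting the instance in the foreground (bin/instance fg) or putting
#    "debug-mode on" in zope.conf makes debugError() show the real cause
#    instead of the plain not-found page.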

Related

Symfony logging with Monolog, confused about STDERR

I am trying to align my logging with the best practice of using STDERR.
So, I would like to understand what happens with the logs sent to STDERR.
The official Symfony docs (https://symfony.com/doc/current/logging.html) say:
In the prod environment, logs are written to STDERR PHP stream, which
works best in modern containerized applications deployed to servers
without disk write permissions.
If you prefer to store production logs in a file, set the path of your
log handler(s) to the path of the file to use (e.g. var/log/prod.log).
This time I want to follow the STDERR stream option.
When I was writing to a specific file, I knew exactly where to look for that file, open it and check the logged messages.
But with STDERR, I don't know where to look for my logs.
So, using monolog, I have the configuration:
monolog:
    handlers:
        main:
            type: fingers_crossed
            action_level: error
            handler: nested
            excluded_http_codes: [404, 405]
        nested:
            type: stream
            path: "php://stderr"
            level: debug
Suppose next morning I want to check the logs. Where would I look?
Several hours of reading docs later, my understanding is as follows:
First, STDERR is preferred over STDOUT for errors because it is not buffered (STDOUT may gather all output and wait for the script to end), so errors hit the STDERR stream immediately. It also keeps normal output from getting mixed with errors.
Secondly, the immediately intuitive usage is when running a shell script, because in the terminal you directly see the STDOUT and STDERR messages (by default, both streams print to the screen).
But then there is the less intuitive usage of STDERR: logging for a website/API. We want to log the errors, and we want to be able to monitor errors that have already occurred, that is, to come back later and check them. Traditional practice stores errors in custom-defined log files. More modern practice recommends sending errors to STDERR. Regarding Symfony, Fabien Potencier (the creator of Symfony) says:
in production, stderr is a better option, especially when using
Docker, SymfonyCloud, lambdas, ... So, I now recommend to use
php://stderr
(https://symfony.com/blog/logging-in-symfony-and-the-cloud).
And he further recommends using STDERR even for development.
Now, what I believe is missing from the picture (at least for me, as a non-expert) is guidance on HOW to access and check the error logs. Okay, we send the errors to STDERR, and then? Where am I going to check the errors next morning? I get it that containerized platforms (clouds, Docker, etc.) have specific tools to easily and nicely monitor logs (tools that intercept STDERR and parse the messages in order to organize them in specific files/DBs), but that's not the case on a simple server, be it a local server or ordinary hosting.
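As a brief aside on the containerized case: with Docker, for example, whatever the app writes to STDERR ends up in the container's log stream, so checking it the next morning is a one-liner (the container name here is hypothetical):

# Show the last 24 hours of STDOUT/STDERR from the container
docker logs --since 24h my-php-app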
Therefore, my understanding is that sending errors to STDERR is a good standardization when:
You rely on a third-party tool for log monitoring (like ELK, Grail, Sentry, Rollbar, etc.)
You know exactly where your web server stores the STDERR output. For instance, if you try (I defined a new STD_ERR constant to avoid any pre-existing configuration):
define('STD_ERR', fopen('php://stderr', 'wb'));
fputs(STD_ERR, "ABC error message.");
you can find the "ABC error message" at:
XAMPP Apache default (Windows):
..\xampp\apache\logs\error.log
Symfony5 server (Windows):
C:\Users\your_user\.symfony5\log\ [in the most recent folder, as the logs rotate]
Symfony server (Linux):
/home/your_user/.symfony/log/ [in the most recent folder, as the logs rotate]
For the Symfony server, you can actually see the log paths when starting the server, or via the command "symfony server:log".
One immediate advantage is that these STDERR logs are stored outside of the app folders, and you do not need to maintain extra writable folders or deal with permissions, etc. Of course, when developing/hosting multiple sites/apps, you need to configure the error log (the STDERR storage) location per app (in Apache that would be inside each <VirtualHost> conf, as sketched below; with the Symfony server, I am not sure). Personally, without a third-party tool for monitoring logs, I would stick with custom-defined log files (no STDERR), but feel free to contradict me.
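A hedged sketch of that per-VirtualHost configuration, assuming mod_php (where php://stderr output ends up in Apache's error log, as in the XAMPP example above); the host name and paths are made up for illustration:

<VirtualHost *:80>
    ServerName app-one.example.test
    DocumentRoot /var/www/app-one/public
    # STDERR output from this app is captured in its own error log.
    ErrorLog /var/log/apache2/app-one.error.log
</VirtualHost>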

Getting "Resource not found error" while using Azure File Sync

Facing a very strange issue.
I am following this guide https://azure.microsoft.com/en-in/documentation/articles/app-service-mobile-xamarin-forms-blob-storage/ to implement File Sync in a Xamarin.Forms app.
The Get method in my service (GetUser, the default get method in the App Service controller) is being called three times, and on the third call it gives me a 404 resource-not-found error. The first two calls work fine.
This is the client call:
await userTable.PullAsync(
    null,
    userTable.Where(x => x.Email == userEmail),
    false,
    new System.Threading.CancellationToken(),
    null);
If I remove the following line,
// Initialize file sync
this.client.InitializeFileSyncContext(new TodoItemFileSyncHandler(this), store);
then the code works just fine, without any errors.
I will need some time doing a sample project, meanwhile if anyone can shed some light, it will be of help.
Thanks
This won't be an answer, because there isn't enough information to go on. When you get a 404, it's because the backend returned a 404. The ideal approach is:
Turn on Diagnostic Logging in the Azure Portal for your backend
Use Fiddler to monitor the requests
When the request causes a 404, look at what is actually happening
If you are using an ASP.NET backend (and I'm assuming you are because all the File tutorials use ASP.NET), then you can set a breakpoint on the appropriate method in the backend and follow it through. You will need to deploy a debug version of your code.
This is sorted now; eventually I had to give it what it was asking for. I had to create a storage controller for User too, although I don't need one, as I don't need to save any files in storage against users (a sketch of such a controller is below).
I am testing the app further now to see if this sorts my problem completely or whether I need a storage controller for every entity I use in my app.
If that is the case it will be really odd, as I don't intend to use the storage for all my entities.
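For reference, a sketch of what such a per-entity storage controller can look like, following the pattern from the linked tutorial. The base class, namespace and helper methods (StorageController<T>, GetStorageTokenAsync, GetRecordFilesAsync, DeleteFileAsync) are recalled from that tutorial and the Azure Mobile Apps Files SDK and may differ in your version; User stands in for your existing data object:

using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;
using Microsoft.Azure.Mobile.Server.Files;  // namespace may vary by SDK version

// Hypothetical controller mirroring the tutorial's TodoItemStorageController.
public class UserStorageController : StorageController<User>
{
    [HttpPost]
    [Route("tables/User/{id}/StorageToken")]
    public async Task<HttpResponseMessage> PostStorageTokenRequest(string id, StorageTokenRequest value)
    {
        // Issues a SAS token so the client can talk to blob storage directly.
        StorageToken token = await GetStorageTokenAsync(id, value);
        return Request.CreateResponse(token);
    }

    [HttpGet]
    [Route("tables/User/{id}/MobileServiceFiles")]
    public async Task<HttpResponseMessage> GetFiles(string id)
    {
        // Lists the files associated with this record (empty if you store none).
        IEnumerable<MobileServiceFile> files = await GetRecordFilesAsync(id);
        return Request.CreateResponse(files);
    }

    [HttpDelete]
    [Route("tables/User/{id}/MobileServiceFiles/{name}")]
    public Task Delete(string id, string name)
    {
        return DeleteFileAsync(id, name);
    }
}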

MailAddressCollection.Add() only sometimes chokes on periods

We encountered a bug in our production code wherein an email address display name wasn't being properly quoted. This effectively demonstrates the broken code:
var recipients = new MailAddressCollection();
var address = "Mr. Smith <mr.smith@example.com>";
recipients.Add(address);
And, as it probably should, it throws a FormatException in production -- an ASP.NET site. Adding quotes around the display name portion seems to solve the problem in production:
var address = "\"Mr. Smith\" <mr.smith@example.com>";
But in testing, no exception is thrown. And our SMTP abstraction is properly invoked.
Why doesn't the format problem raise an exception during testing / outside the context of IIS?
Addendum: We realized our server is still on .NET 3.5, whereas the test project targets .NET 4.0.
Is there any evidence to suggest that the 3.5 version of the method was broken? (Or "different.")
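As an aside, a sketch (not from our code base) of sidestepping the manual quoting altogether: pass the display name as a separate constructor argument and System.Net.Mail quotes and encodes it as needed.

using System.Net.Mail;

var recipients = new MailAddressCollection
{
    // Display name goes in its own argument, so periods need no hand-escaping.
    new MailAddress("mr.smith@example.com", "Mr. Smith")
};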
The backstory, for your entertainment: A bug was reported wherein users with periods in their names were not receiving any of their emails (password resets, new product notifications, etc.). Our initial assumption was that this email address parsing step needed additional escaping or quoting. So my efforts were focused on adding a handful of tests to prove that; but they proved the opposite: they showed that escaping made no difference.
After over a day of adding tests upon tests to find the point at which adding a period to the name mattered, I asked a couple of other developers to help me out -- one of them brilliantly implemented the fix regardless of what the tests showed, and it seems to have worked.
We still have no idea why the relevant tests don't properly fail.

SmartTarget Errors in log file

I don't have any errors with my SmartTarget application, but I do see the following error messages in the event log:
ERROR 2012-09-19 14:30:09
com.tridion.smarttarget.utils.AmbientDataHelper - can't find defined trigger-types in claim store (check if your smarttarget cartridge is up and running)
and:
ERROR 2012-09-19 14:30:11
com.tridion.smarttarget.tags.TimeoutQueryRunner - The fredhopper query timed out
java.util.concurrent.TimeoutException
    at java.util.concurrent.FutureTask$Sync.innerGet(Unknown Source)
    at java.util.concurrent.FutureTask.get(Unknown Source)
    at com.tridion.smarttarget.tags.TimeoutQueryRunner.executeQuery(TimeoutQueryRunner.java:64)
ERROR 2012-09-19 14:30:11
com.tridion.smarttarget.tags.TimeoutQueryRunner - The fredhopper query timed out
I would really like to understand what is causing these and how I can remove them. Or some suggested steps to help me debug this would be great :)
As I say, everything is working perfectly; later in the logs I see that the query to ST is correct and the results are being generated.
In case it helps, I'm running on a 2009 implementation with SmartTarget 2010 and Java 1.5.
thanks
John
Sounds like you might have a trigger configured in ST that does not actually exist in the ADF (or is mismatched). Have you looked through your trigger-types.xml file for anything obvious? Have you disabled an ADF cartridge but not removed the corresponding trigger in the XML perhaps? See the documentation for Defining trigger types.
I think your timeout is coming from the SmartTarget region rather than Fredhopper. Sometimes a query that isn't already cached in Fredhopper can take a while to return, even though it's ultimately successful. The ST query tag has a timeout (defined in the smarttarget_conf.xml file, or overridden with a tag attribute); it waits that long for a response from Fredhopper before resorting to the fallback content. This might explain why you see later in the logs that the query is correct and that results are returned. See the documentation for <tcdl:query>.
No conclusive answer for you I'm afraid, but I hope that helps.
The first error is logged if your SmartTarget cartridge is not running -- or if the data that it puts into ADF is lost somehow (e.g. you have disabled sessions in your web server).
In that case, SmartTarget will still do a query but it won't include anything from the Ambient Data Framework in it. If you don't have any triggers based on ambient data, the end result is the same for you.
To get rid of the error, make sure that smarttarget_cartridge is configured correctly.
As for the timeout error, it simply means that the query sent to Fredhopper took longer than the configured time. In that case it will show the fallback content instead. If this is happening a lot, you might want to increase the timeout within smarttarget_conf.xml.
I hope you found the issue, but for future reference, the first error message is raised when the claim "taf:claim:ambientdata:definedtriggertypes" is not set by the SmartTarget cartridge. This can be caused by:
The SmartTarget cartridge could not load the trigger types from the SmartTarget server. The log will show an error "can't retrieve list of defined trigger types from FH".
The HTTP session on your web server expired during an active visit (the HTTP session expired but the browser is still open) and the claim is "lost".
The server does not support sessions, as Peter mentioned.

Haskell System.Timeout.timeout crashing when called from certain function

I'm scraping some data from the frontpages of a list of website domains. Some of them are not answering, or are very slow, causing the scraper to halt.
I wanted to solve this by using a timeout. The various HTTP libraries available don't seem to support that, but System.Timeout.timeout seems to do what I need.
Indeed, it seems to work fine when I test the scraping function, but it crashes as soon as I run the enclosing function (sorry for the bad/ugly code; I'm learning):
fetchPage domain =
    -- Try to read the file from disk.
    catch
        (System.IO.Strict.readFile $ "page cache/" ++ domain)
        (\e -> downloadAndCachePage domain)

downloadAndCachePage domain =
    catch
        (do
            -- Failed, so try to download it.
            -- This crashes when called by fetchPage, but works fine when called directly.
            maybePage <- timeout 5000000 (simpleHTTP (getRequest ("http://www." ++ domain)) >>= getResponseBody)
            let page = fromMaybe "" maybePage
            -- This mostly works, but won't time out if the domain is slow. (lswb.com.cn)
            -- page <- (simpleHTTP (getRequest ("http://www." ++ domain)) >>= getResponseBody)
            -- Cache it.
            writeFile ("page cache/" ++ domain) page
            return page)
        (\e -> catch
            (do
                -- Failed, so just fuggeddaboudit.
                writeFile ("page cache/" ++ domain) ""
                return "")
            (\e -> return "")) -- Failed BIG, so just don't give a crap.
downloadAndCachePage works fine with the timeout when called from the REPL, but fetchPage crashes. If I remove the timeout from downloadAndCachePage, fetchPage works.
Can anyone explain this, or suggest an alternative solution?
Your catch handler in fetchPage looks wrong -- it seems you're trying to read a file, and on a file-not-found exception you are calling directly into your HTTP function from the exception handler. Don't do this. For complicated reasons, as I recall, code in exception handlers doesn't always behave like normal code -- particularly when it attempts to handle exceptions itself. And indeed, under the covers, timeout uses asynchronous exceptions to kill threads.
In general, you should put as little code as possible in exception handlers, and especially not put code that tries to handle further exceptions (although it is generally fine to reraise a handled exception to "pass it on" [as with bracket]).
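To make that concrete, here is a minimal sketch (not the poster's code) of the same logic restructured so the download, and therefore the timeout, never runs inside an exception handler; it checks the cache with doesFileExist instead of catching the read failure:

import Control.Exception (SomeException, try)
import Network.HTTP (getRequest, getResponseBody, simpleHTTP)
import System.Directory (doesFileExist)
import System.Timeout (timeout)
import qualified System.IO.Strict

fetchPage :: String -> IO String
fetchPage domain = do
    let path = "page cache/" ++ domain
    cached <- doesFileExist path
    if cached
        then System.IO.Strict.readFile path
        else downloadAndCachePage domain

downloadAndCachePage :: String -> IO String
downloadAndCachePage domain = do
    let path = "page cache/" ++ domain
    -- try and timeout both run in ordinary IO code, not inside a handler.
    result <- try (timeout 5000000 (simpleHTTP (getRequest ("http://www." ++ domain)) >>= getResponseBody))
                  :: IO (Either SomeException (Maybe String))
    let page = case result of
                   Right (Just body) -> body
                   _                 -> ""  -- timed out or threw: fall back to an empty page
    -- Caching failures are deliberately ignored.
    _ <- try (writeFile path page) :: IO (Either SomeException ())
    return page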
That said, even if you're not doing the right thing, a crash (if it is a segfault type crash as opposed to a <<loop>> type crash), even from weird code, is nearly always wrong behavior from GHC, and if you're on GHC 7 then you should consider reporting this.
