Searching through ELMAH error log files (perhaps in the 1000's) - asp.net

We have a folder of ELMAH error logs in XML format. There will be millions of these files, and each file might be up to 50 KB in size. We need to be able to search the files (e.g. what errors occurred, what system failed, etc.). Is there an open source system that will index the files and let us search through them using keywords? I have looked at Lucene.Net, but it seems that I will have to code the application myself.
Please advise.

If you need to keep the logs as XML files in a folder, elmah-loganalyzer might be of use.
You can also use Microsoft's Log Parser to perform SQL-like queries over the XML files:
LogParser -i:XML "SELECT * FROM *.xml WHERE detail like '%something%'"
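Depending on how Log Parser flattens the XML, you may also be able to select individual fields; a hedged variant (the field names here are assumptions that depend on your XML structure):
LogParser -i:XML "SELECT message, type, time FROM *.xml WHERE type LIKE '%SqlException%'" -o:CSV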
EDIT:
You could use a combination of Nutch + Solr or Logstash + Elasticsearch as an indexing solution.
http://wiki.apache.org/nutch/NutchTutorial
http://lucene.apache.org/solr/tutorial.html
http://blog.building-blocks.com/building-a-search-engine-with-nutch-and-solr-in-10-minutes
http://www.logstash.net/
http://www.elasticsearch.org/tutorials/using-elasticsearch-for-logs/
http://www.javacodegeeks.com/2013/02/your-logs-are-your-data-logstash-elasticsearch.html
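
If you do end up coding against Lucene.Net yourself, the indexing side is fairly small. A minimal sketch, assuming the Lucene.Net 3.x API (the folder paths and field names are placeholders):

using System.IO;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Store;
using Version = Lucene.Net.Util.Version;

// Build a full-text index over every ELMAH XML file in a folder.
var indexDir = FSDirectory.Open(new DirectoryInfo(@"C:\elmah-index"));
var analyzer = new StandardAnalyzer(Version.LUCENE_30);
using (var writer = new IndexWriter(indexDir, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED))
{
    foreach (var file in System.IO.Directory.GetFiles(@"C:\elmah-logs", "*.xml"))
    {
        var doc = new Document();
        // Store the path so a search hit can link back to the original file.
        doc.Add(new Field("path", file, Field.Store.YES, Field.Index.NOT_ANALYZED));
        // Analyze the raw XML for keyword search; not stored, to keep the index small.
        doc.Add(new Field("content", File.ReadAllText(file), Field.Store.NO, Field.Index.ANALYZED));
        writer.AddDocument(doc);
    }
    writer.Commit();
}

Searching is then an IndexSearcher plus QueryParser over the same directory, so the amount of coding involved is modest.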

We are a couple of developers behind the website http://elmah.io. elmah.io indexes all your errors (in Elasticsearch) and makes it possible to do funky searches, group errors, hide errors, time-filter errors and more. We are currently in beta, but you will get a link to the beta site if you sign up at http://elmah.io.
Unfortunately elmah.io doesn't import your existing error logs. We will open source an implementation of the ELMAH ErrorLog type, which indexes your errors in your own Elasticsearch (watch https://github.com/elmahio for the project). Again, this error logger will not index your existing error logs, but you could implement a parser which runs through your XML files and indexes everything using our open source error logger. Alternatively, you could import the errors directly into elmah.io through our API if you don't want to implement a new UI on top of Elasticsearch.
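If you do write such a parser, ELMAH's XmlFileErrorLog stores each error as a single <error> element with the details in attributes, so reading one file is short. A rough sketch (the attribute names are assumptions from memory; verify them against one of your own files):

using System.Xml.Linq;

// Read the fields of one ELMAH error file (the file name is a placeholder;
// attribute names assumed from ELMAH's XmlFileErrorLog format).
var error = XDocument.Load(@"C:\elmah-logs\error-example.xml").Root;
var message = (string)error.Attribute("message");
var type = (string)error.Attribute("type");
var host = (string)error.Attribute("host");
var time = (string)error.Attribute("time");
// Feed these into your indexer, or send them to an import API.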

Related

What's with all the attempts to access \\replace_with_server_name\PIPE\sql\query in my web app?

I have an ASP.NET web app that uses Web Forms with Telerik controls and a few other libraries, runs on Windows Azure, and uses SQL Azure. I'm using Process Monitor to see what happens on the machine during application startup (trying to diagnose why the initial startup takes about a minute, but that's a separate question). I see lots of CreateFile events from the w3wp.exe process hosting my app pool that have Path = \\replace_with_server_name\PIPE\sql\query and Result = BAD NETWORK PATH. Where are these coming from!?
I've searched my source code and don't find replace_with_server_name anywhere. I'll try searching through all the referenced dlls of the solution but does anyone recognise this path and have a suggestion where it might come from?
Note that \\replace_with_server_name\PIPE\sql\query is the exact Path that I see in Process Monitor - I haven't modified it for the purposes of this question. I'm guessing some library I'm using has this value as a default, or something like that.
Update -
I've searched through all the dll and config files in my bin directory, and through all the dlls in the .NET Framework directory and through all the config files in the .NET Framework\Config directory, but haven't found "replace_with_server_name" anywhere. I've also searched various locations like all .dlls in c:\windows\system32, all files in c:\windows\Microsoft.NET, and no luck.
Any ideas on other places I can look? I did some of my searching using HxD editor and then I found PSPad which will do a hex search of multiple files - much quicker. I searched using Windows, ANSI, UTF-8, UTF-16 LE, and UTF-16 BE with no luck ... although it's possible I missed a couple of variations. Surely this text has to be somewhere!?
The CreateFile Win32 API is used to open many kinds of streams, including named pipes. It looks to me like something in your code is trying to create a named pipe connection to your SQL Server, but the server name specified in the connection string is 'replace_with_server_name' by default.
You might want to take a look at your config file and/or any code that describes the name of the SQL server to which you want to connect and be sure to specify the correct server name.
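For example, a placeholder that was never replaced in web.config would produce exactly this path, since the SQL client's named pipe connection opens \\<server>\PIPE\sql\query. A hypothetical snippet (the connection string name is invented):

<connectionStrings>
  <!-- 'replace_with_server_name' was never swapped for a real server,
       so the client tries \\replace_with_server_name\PIPE\sql\query. -->
  <add name="MyDb"
       connectionString="Data Source=replace_with_server_name;Initial Catalog=MyDb;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>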
HTH.

ADODB recordset error "Operation is not allowed when the object is open" (3705)

I have a legacy ASP application that I support. By support I mean that I haven't touched it since about 2005, because it's just worked.
However there were a couple of data issues in the Access database that the ASP application uses. So like a fool I opened the database directly over a fileshare (using MS Access 2007), fixed the data and saved it down (in Access 2000 format).
Now the application will retrieve and display the data OK, but any updates fail with the error 3705: Operation is not allowed when the object is open. I have not changed the code in any way, the only change was the data update and database save.
I've found plenty of examples of this error, but they all relate to fairly simple issues like ensuring the recordset is closed before opening it, changing the CursorLocation enum, etc. I've tried most of these in the vain hope that something will work, but nothing has.
Any ideas how can I fix this?
Thanks.
UPDATE
I've installed a web-based Access database management system and have tried to compact and repair the database. I received this error:
The Microsoft Jet database engine cannot open the file '<snip>'. It is
already opened exclusively by another user, or you need permission to view
its data. (-2147217911)
I have run the macro detailed here to determine who is logged on to the database, and it just showed the admin user (which was me, while running it).
Those errors mean one thing: the database file is opened by some other process and thus is being locked.
Most likely that "web based access database management system" is the culprit, try to find how you can configure it to not lock the file, or get rid of it.
As a workaround, or as a way to verify the real problem, you can copy the .mdb file to a different location and change the classic ASP connection string to check whether you can update the database in its new location.
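For example, if the app uses the Jet OLE DB provider, pointing it at the copied file is just a matter of changing Data Source (the path below is a placeholder):
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\TempCopy\legacy.mdb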
@Remou's comment above about checking the file and folder permissions was correct.
I had our server admin check the permissions, and it seems that the write access had dropped off the folder (and the files also inherit their permissions from the folder). He said that this has happened before when saving directly over the fileshare.
(accepting in lieu of an answer from @Remou)

Tridion: Binary components do not get deployed when published in bulk

I am using Tridion 5.3.
I have a web page that has over 100 PDF links attached to it. When I publish that page, not all of the PDFs get published, even though I get a URL for each PDF like "/pdf/xyzpdfname_tcm8-912.pdf". When I click on those links I get a 404 error. For the same PDF components that give the error, if I publish them by attaching 5 to 10 PDFs at a time, they get published, there is no 404 error, and everything works fine. But that's not the functionality I need. Does anyone know why Tridion is not able to deploy the binary contents when I publish them in bulk?
I am using engine.PublishingContext.RenderedItem.AddBinary(pdfComponent).Url to get the pdf url.
Could this be to do with the naming of your PDF?
Tridion has a mechanism in place to prevent you from accidentally overwriting a binary file with a different binary file that is named the same.
I can see the Binary you are trying to deploy has the ID:
tcm:8-755-16
and you are naming it as follows:
/www.mysite.com/multimedia/pdfname_tcm8-765.pdf
Using the Variant Id:
variantId=tcm:8-755
is it possible you are also publishing the same binary from a different template? Perhaps with the same filename, but with a different Variant Id?
If so, Tridion assumes you are trying to publish two 'variants' of the same binary (for example a resized image; obviously not relevant for PDFs).
The deployer is therefore throwing an error to prevent you from accidentally overwriting the binary that is published first.
You can get around this in two ways:
1. Use the same variant ID when publishing both binaries.
2. If you do want to publish a variant, change the filename to something different.
I hope this helps!
Have a look at the log files for your transport service and deployer. If those don't provide clarity, set Cleanup to false in cd_transport_conf.xml, restart the transport service and publish again. Then check if all PDFs ended up in your transport package.
engine.PublishingContext.RenderedItem.AddBinary(pdfComponent).Url gives you the URL the item will have if publishing succeeds; it is not a guarantee that it will be published.
Pretty sure you're just hitting a maximum size limit on your transport package.
PS - Check the status of your transaction in the publishing queue, might give you a hint
After you updated the question:
There's something terribly wrong with the template and/or your environment. The published URL says "tcm8-765.pdf", but the item URI is "tcm:8-755" (note the 6 versus the 5).
Can you double check what's happening in here?

I can't write to a SQLite database in my AIR app

I am publishing an AIR app in debug mode using FlashDevelop and have included a database in the files/folders to be published.
When the app first launches, it checks whether there is an instance of this db in the applicationStorageDirectory; if there isn't, it copies the included one from the applicationDirectory to the applicationStorageDirectory.
This should mean that the referenced database dbFile = File.applicationStorageDirectory.resolvePath(DB_FILE_NAME); should now be writable. However, when I run the app I can read the records from the table, but when I attempt to write using an SQL statement I get an SQLError: 'Error #3122: Attempt to write a readonly database'.
I know that this would be thrown if I was attempting to write to the read-only location of the applicationDirectory, but I'm certainly using the File.applicationStorageDirectory location, which should (as far as I know) be writable.
The location of the db on my Windows 7, 64-bit machine is C:\Users\sean.duffy\AppData\Roaming\FishFightAppData\Local Store\db (found using the dbFile.nativePath property), so again I'm sure I should be able to update the db.
Anyone got any ideas? I have tried everything I could think of and searched all over, but the only common cause seems to be when people try to write to the applicationDirectory and not the storage directory...
UPDATE:
My bad - I have just realised that I misread the API of the 3rd party library I'm using! I should have been calling executeModify(statement), which can modify the contents of the db; instead I was calling execute(statement), which doesn't/can't overwrite the db.
The source code is compiled into a SWC and there was no documentation to point out that you needed to use executeModify, although I should have guessed from the name, I suppose!
Sorry about that and thanks for your help
(As a public courtesy to get this off the unanswered list, I am reposting the apparent solution. As usual, the asker is more than welcome to ignore mine and post it themselves and accept their own answer.)
In this API, you need to call executeModify(statement), not execute(statement). The latter does not overwrite the database.

How do I get site usage from IIS?

I want to build a list of user-URL pairs.
How can I do that?
By default, IIS creates log files in the system32\LogFiles directory of your Windows folder. Each website has its own folder, beginning with “W3SVC” and then incrementing sequentially from there (i.e. “W3SVC1”, “W3SVC2”, etc.). In there you’ll find a series of log files containing details of each request to your website.
To analyse the files, you can either parse them manually (i.e. suck them into SQL Server and query them) or use a tool like WebTrends Log Analyser. Having said that, if you really want to track website usage you might be better off taking a look at Google Analytics; it's much simpler to use, with no large volumes of log files to deal with or hefty license fees to pay.
If you have any means of identifying your users via the web server logs (e.g. a username in the cookie), then you can do it by parsing your web logs and getting the info from the cs-uri-stem, cs-uri-query and cs(Cookie) fields.
Alternatively, you can rely on an external tracking system (e.g. Omniture).
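If you go the log-parsing route, a Log Parser query along these lines would produce the user-URL list, assuming IIS W3C logs with the cs-username field enabled (the log file pattern is a placeholder):
LogParser -i:IISW3C "SELECT cs-username, cs-uri-stem, COUNT(*) AS Hits FROM ex*.log GROUP BY cs-username, cs-uri-stem ORDER BY Hits DESC"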
I ended up finding the log files in C:\inetpub\logs\LogFiles.
I used Log Parser Studio from Microsoft to parse the data. It has lots of documentation on how to query IIS log files, including sample queries.
