I have a client who has set up a testing environment in some AI language. It basically runs some predefined test cases and stores the results as log files (comma-separated txt files). My job is to identify and suggest a reporting system, and I have these options in mind:
1. Import the logs into MSSQL and use the reporting (SSRS) it provides, or
2. Import the logs into MySQL and use PHP to develop custom reporting.
I am thinking that option 2 is better. The reason is that the logs are inconsistent and contain unexpected wild characters that databases normally don't accept, so I can write some PHP scripts to clean them up before loading them into the database.
If this were your problem, what would you suggest?
It depends how fancy you need to be. If the data is in CSV files, you could even go as simple as loading it into Excel (or your client's favorite spreadsheet tool) and using spreadsheet macros to analyze it.
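If you do go the database route instead, the pre-load cleanup you describe can be a small standalone script. Here is a rough sketch of the idea in Python (the same approach works in PHP); the file names and the "strip anything non-printable" rule are just placeholders for whatever your logs actually need:

    import csv

    def clean_field(value):
        # Drop non-printable "wild" characters that the database import would choke on.
        return "".join(ch for ch in str(value) if ch.isprintable()).strip()

    # Read the raw log, write a cleaned copy ready for the bulk load.
    with open("testrun.log", newline="", errors="replace") as src, \
         open("testrun_clean.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        for row in csv.reader(src):
            writer.writerow([clean_field(field) for field in row])

Whatever tool you load with afterwards (SSRS on MSSQL or PHP on MySQL), a cleanup pass like this keeps the import itself simple.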
I'm working on a project that has some pretty specific requirements, and am running into a problem with one of them. We have a locked SQLite database. We can't unlock this database, but we need to read it (not write to it), and we cannot create any new files on the filesystem that contain the data from this database. What was suggested is to read the file into RAM and then access it from there. I've been trying to find a way to do this, but this project is on Windows, so it's not going as smoothly as it might otherwise.
What I've been trying to do is read the file into a bash variable and then pass that variable to sqlite as the database. This hasn't been working particularly well.
I installed win-bash, but when I do "sqlite3.exe <(cat <<<"$database")" I get a "syntax error near unexpected token `<('". I checked, and win-bash looks like it's based on an older version of bash. I tried zsh, but it says "doesn't look like your system supports FIFOs.". I installed Cygwin, which wouldn't really be a good solution anyway (once I figure out how to do this, I need to pass it off to our Qt developers so that they can roll it into a Qt application), but I was just trying to do a proof of concept. That didn't work either: SQLite opened just fine, but when I ran ".tables", it said "Error: unable to open database "/dev/fd/63": unable to open database file". So it looks like I'm barking up the wrong tree and need to think of some other way to do this.
I guess my questions are: first, is it possible to read an SQLite database into a variable as I was attempting, or am I going down an entirely incorrect path there? Second, if it can't be done that way, is there some way I'm overlooking that might make this possible?
Thanks!
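One route that avoids process substitution entirely is SQLite's own serialize/deserialize API (sqlite3_deserialize() in the C API, which the Qt developers could also call directly). A rough sketch of the idea in Python, assuming Python 3.11+ (which exposes that API as Connection.deserialize) and a placeholder file name:

    import sqlite3

    with open("locked.db", "rb") as f:      # placeholder path to the locked database file
        image = f.read()                    # the whole database file, now held in RAM

    conn = sqlite3.connect(":memory:")
    conn.deserialize(image)                 # the in-memory connection now holds the data
    print(conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall())
    conn.close()

This never writes anything back to the filesystem; whether it works in your case depends on being able to read the raw file bytes despite the lock.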
I am writing a PL/SQL procedure that takes an Excel file as input through the front end, and using that Excel input the procedure inserts, updates, or deletes records in an existing table. Can anyone show me the approach for this?
If that "Excel" file has to be really in native XLS(X) format, a simple option - if you want to stay within Oracle boundaries - is an Apex application which offers a data loading wizard. Takes 4 pages to create it (don't worry, Apex Wizard creates almost everything for you). Once the loading is over, a (stored) procedure can do the rest of processing (you'd call it by pushing a button).
Alternatively, if you save the contents of that file as a CSV file, you can load it with SQL*Loader, a utility run at the operating system command prompt. You'd have to create a control file (no wizard to do that, I'm afraid). This approach probably isn't convenient for end users (who's going to type anything at the command prompt?), so you'd have to create some kind of application to do that for them.
Or, CSV again, but this time used as an external table. This approach requires the file to be located in a directory accessible to the database server (most frequently the directory is on that machine, and you usually don't want to grant just anyone access to it). Its advantage is that you can access the CSV file directly from (PL/)SQL, fetch data from it, perform various adjustments, etc.
If you're capable of writing programs that aren't part of the Oracle niche (I'm not), go for it (but I can't suggest anything; someone else might).
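If someone does go down that path, here is a rough sketch of the idea in Python, using openpyxl to read the spreadsheet and python-oracledb to apply the rows; the library choice, sheet layout, table name, and connection details are all assumptions, not anything from the question:

    import openpyxl          # reads .xlsx workbooks
    import oracledb          # python-oracledb driver

    wb = openpyxl.load_workbook("input.xlsx", read_only=True)
    rows = [tuple(r) for r in wb.active.iter_rows(min_row=2, values_only=True)]  # skip header row

    with oracledb.connect(user="app_user", password="app_pwd",
                          dsn="dbhost/orclpdb") as conn:
        cur = conn.cursor()
        # MERGE covers both the insert and the update case in one statement;
        # deletes would need a separate pass driven by the spreadsheet contents.
        cur.executemany("""
            MERGE INTO target_table t
            USING (SELECT :1 AS id, :2 AS val FROM dual) s
            ON (t.id = s.id)
            WHEN MATCHED THEN UPDATE SET t.val = s.val
            WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
        """, rows)
        conn.commit()

The front end would upload the file, this script (or its equivalent in whatever language you prefer) would push the rows in, and your existing PL/SQL could take over from there.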
I am trying to analyze weblog files with R. I am comfortable dealing with the dates and bytes, wherever numeric data is present, but I struggle to deal with the strings.
From the log file (in CSV format), I want to find a particular user (with the help of IP and agent) and their total spending on the web page.
There are numerous libraries to do this kind of analysis, although I could find none in R. A Google search for "parse apache logfile" yielded a library in Perl, and "python parse apache logfile" yields the Scratchy library. Both rely on regular expressions to parse the contents of the file.
From here, there are two ways to deal with the Apache logfile:
Call Perl or Python from R, either using a direct link or using a system call (this is simpler; see the sketch after this list).
Take the idea from the Perl or Python library and use it to implement R versions of the functions. This will take a lot of time.
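For illustration, here is a rough Python sketch of the regex approach; you could call a script like this from R via system() and read the summary back in. The Apache "combined" log format and the per-IP/agent byte totals are assumptions about what you actually need:

    import re
    from collections import defaultdict

    # Apache "combined" log format: IP, identd, user, [timestamp], "request",
    # status, bytes, "referer", "user agent".
    LINE = re.compile(
        r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<request>[^"]*)" '
        r'(?P<status>\d{3}) (?P<bytes>\d+|-) "(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
    )

    bytes_per_user = defaultdict(int)
    with open("access.log") as f:           # placeholder file name
        for line in f:
            m = LINE.match(line)
            if m is None:
                continue                    # skip lines that don't match the format
            key = (m.group("ip"), m.group("agent"))
            if m.group("bytes") != "-":
                bytes_per_user[key] += int(m.group("bytes"))

    for (ip, agent), total in sorted(bytes_per_user.items(), key=lambda kv: -kv[1]):
        print(f"{total}\t{ip}\t{agent}")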
You refer to a CSV file, but I think the libraries above work with the original Apache log text file, so I'd use those, and not your CSV file.
In addition, this SO post mentions an answer by @doug (profile) where he states that he has created some functions to visualize Apache logfile data parsed with Python. Maybe you could send him a message or an email and see if he is willing to share the code.
Logfile analysis in R is an interesting topic we have had before; you can find our discussion right here. Maybe that discussion might also help you adjust to the SO etiquette and get better feedback (not to take anything away from yours, Paul).
I want to build a list of user-URL pairs.
How can I do that?
By default, IIS creates log files in the system32\LogFiles directory of your Windows folder. Each website has its own folder beginning with “W3SVC” and incrementing sequentially from there (i.e. “W3SVC1”, “W3SVC2”, etc.). In there you’ll find a series of log files containing details of each request to your website.
To analyse the files, you can either parse them manually (i.e. suck them into SQL Server and query them) or use a tool like WebTrends Log Analyser. Having said that, if you really want to track website usage you might be better off taking a look at Google Analytics. It's much simpler to use, and you avoid dealing with large volumes of log files or paying hefty license fees.
If you have any means of identifying your users via the web server logs (e.g. a username in the cookie), then you can do it by parsing your web logs and getting the info from the cs-uri-query and cs(Cookie) fields.
Alternatively, you can rely on external tracking systems (e.g. Omniture).
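A minimal sketch (Python here) of pulling user/URL pairs out of a W3C-format IIS log: it reads the #Fields: directive to locate the columns. Using cs-username as the user key is an assumption; you might instead dig an identifier out of cs(Cookie), and the file name is a placeholder:

    from collections import defaultdict

    user_urls = defaultdict(set)
    fields = []

    with open("u_ex231001.log") as f:       # placeholder log file name
        for line in f:
            if line.startswith("#Fields:"):
                fields = line.split()[1:]   # column names follow the directive
                continue
            if line.startswith("#") or not fields:
                continue                    # other directives, or no header seen yet
            row = dict(zip(fields, line.split()))
            user = row.get("cs-username", "-")
            if user != "-":
                user_urls[user].add(row.get("cs-uri-stem", ""))

    for user, urls in sorted(user_urls.items()):
        print(user, sorted(urls))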
I ended up finding the log files in C:\inetpub\logs\LogFiles.
I used Log Parser Studio from Microsoft to parse the data. It has lots of documentation on how to query IIS log files, including sample queries.
I need to download two Excel files onto the client and then run a (diff) executable against them. I know how to download a single Excel file, from here. But how do I download a second one automatically in succession? And then how do I run a batch command on them? Is this even realistic? Any guidance or pointers would be greatly appreciated.
Thanks,
Mike
To download multiple files at once you have two main options:
1) Just open multiple windows to your page generation script to download multiple files as per http://www.webdeveloper.com/forum/showpost.php?s=b4f6b25edeb6b7ea55434c4685a675fe&p=950225&postcount=6
2) Archive the files into a package (zip/arj/7z, etc.) and send the archive to the client.
e.g. http://www.motobit.com/tips/detpg_multiple-files-one-request/
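A rough sketch of option 2 (the linked article does the same server-side packaging in ASP; this version uses Python's zipfile module, and the file names are placeholders):

    import io
    import zipfile

    def build_package(paths):
        buf = io.BytesIO()
        with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
            for path in paths:
                zf.write(path)              # each file goes in under its own name
        return buf.getvalue()               # bytes to stream back to the client

    payload = build_package(["report_a.xlsx", "report_b.xlsx"])
    # Send with Content-Type: application/zip and a Content-Disposition: attachment header.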
As for doing the diff client-side, that is a lot trickier, as Shhnap has already mentioned. If you are doing this for a controlled client base, you may be able to get them to allow permissions for an ActiveX control that runs something client-side (or fires off a console application), but if you don't have fine control over the client environment then I can't think of a way to do it.
As Shhnap suggested, can you not just do the comparison server-side (and then send the result to the client as a third file)?
Well, just some pointers, because I'm not sure I completely understand the problem. You want a user to be given two downloads at the same time and then to run a diff command against those two files? On the server or the client, I'm not sure. You'll have a lot of problems automating the client-side version, because forcing people to run client-side code is usually frowned upon by virus protection software.
The server-side diff sounds exactly like a CGI moment to me: http://www.cs.tut.fi/~jkorpela/perl/cgi.html. That will allow you to generate a web page that shows the diff between the two. CGI allows you to run programs on your server and display their output in a web page; that's the simple explanation.
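The comparison program behind that CGI endpoint can be small. Here is a rough sketch in Python using difflib; it assumes the two Excel files have first been exported to CSV (that export step and the file names are assumptions, since native .xlsx won't diff usefully as text):

    import difflib

    with open("report_a.csv") as a, open("report_b.csv") as b:
        old_lines, new_lines = a.readlines(), b.readlines()

    html = difflib.HtmlDiff().make_file(old_lines, new_lines,
                                        fromdesc="report_a", todesc="report_b")
    with open("diff.html", "w") as out:
        out.write(html)                     # serve this page back to the requester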
If that was not quite what you wanted, feel free to leave me a comment and I'll try to edit my answer accordingly.