nginx write log to sqlite - nginx

Please could you help me:
can I have NGINX write its access log directly to an SQLite table with the same fields as access.log?
I know I could try to do it with Lua, but I do not know how to hook nginx so that a Lua script runs for every access-log record.

You would use the log_by_lua phase to write access logs, as it runs last and allows you to access variables like upstream_response_time, etc.
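A minimal sketch of that idea, assuming OpenResty (or lua-nginx-module) with the lsqlite3 binding installed; the database path, table name, and column set below are illustrative, not a fixed schema:

location / {
    proxy_pass http://backend;

    # Runs after the response has been sent. lsqlite3 is blocking, so under
    # heavy traffic you may want to buffer rows and flush them via ngx.timer instead.
    log_by_lua_block {
        local sqlite3 = require("lsqlite3")
        local db = sqlite3.open("/var/log/nginx/access.db")
        local stmt = db:prepare(
            "INSERT INTO access_log(remote_addr, request, status, body_bytes_sent, upstream_response_time) VALUES (?,?,?,?,?)")
        stmt:bind_values(
            ngx.var.remote_addr,
            ngx.var.request,
            tonumber(ngx.var.status),
            tonumber(ngx.var.body_bytes_sent),
            ngx.var.upstream_response_time)
        stmt:step()
        stmt:finalize()
        db:close()
    }
}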

Related

how to log nginx.workers.fds_count

I'm trying to log the number of file descriptors nginx is using. The docs suggest I can access this data with nginx.workers.fds_count (docs), but that doesn't result in any useful data. How do I access it?

GoReplay - replay from .log instead of .gor

I am looking into GoReplay to reproduce part of the production traffic that occurred yesterday.
The traffic I want to reproduce has been recorded with nginx, and I can save it as a .log or .csv file.
From what I can tell from the replay http traffic docs it is possible to reproduce traffic using a command like:
sudo gor --input-file request.gor --output-http="http://localhost:3001"
but this requires a .gor file.
My question is, is the reproduction of traffic (using GoReplay) restricted to .gor files, or could I use nginx .log files to do so?
If this is not possible, and given that I don't have a .gor file describing yesterday's requests, would you recommend creating a file conversion script to convert the log files into .gor files, or can you recommend a better approach?
After asking this question on the GoReplay GitHub page, I got the answer that:
* there is no way to reproduce traffic directly from logs;
* you must use .gor files to recreate the traffic;
Thus, the only way to replay that traffic is to create a .log-to-.gor file converter.
link to official answer: https://github.com/buger/goreplay/issues/668
I've found that I can use another package to replay the logs I have, as-is, locally. At the same time, you can have GoReplay listen for that traffic, capture it, and save it to .gor files. Then you can run GoReplay with those newly created files, updating the domain and whatever else you need.
Let me know if you want me to provide a step-by-step.
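As a rough sketch of that capture-and-replay loop (the port, file name, and target URL are placeholders):

# capture the locally replayed traffic into a .gor file
sudo gor --input-raw :80 --output-file requests.gor
# replay the captured file against the target host
sudo gor --input-file requests.gor --output-http="http://localhost:3001"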

Can the qpad queries be recovered if qpad is closed but the port is open?

I ran multiple queries, but before I saved them QPad crashed. However, the q port on which these queries were running (on my Windows machine) is still open. I can recover the variables and functions with \v and \f respectively.
Is there a way to recover all the q statements I ran using QPad? I forgot to maintain a log file, hence I am trying to find a way to recover the queries through the q port.
Thanks
Unfortunately there's no way to retrieve your old queries, for the reasons Davis.Leong said. But if you can't or don't want to create a table on your server to save them, you can also check the "log queries" box in QPad settings:
Q > Settings > Editor > Log queries to "queries_date.log"
Now when you run queries, they will be written to this log file in the same directory as QPad.exe, along with the server and timestamp, like this:
/ 02/26/19 09:54:52 on `:localhost:1234:: from QPad1*
show `logthis
/ 02/26/19 10:03:03 on `:localhost:1234:: from QPad1*
a:10
Unfortunately I don't think there is a way to retrieve your command history. Others have already mentioned why, so I will not go into that. You can, however, easily maintain a log file in the future:
When you start your server, adding the -l flag allows you to define a path to a log file. Any commands sent to the server from a client will then be logged. For example,
q ../log/logtest -l -p 5555
t:([]date:`date$();sym:`sym$();price:`float$())
will start a q process listening on port 5555, logging any messages that cause the server to update. So if I open a handle to 5555 in another q session:
h:hopen `::5555
and update table t:
q)h"insert[`t](2000.01.01;`appl;102.3)"
,0
the server will have updated t like so
q)t
date sym price
---------------------
2000.01.01 appl 102.3
A log file will be created showing any commands sent to the server. NOTE, however, that it will only log those commands that change the state of the server's data.
This log file can be reloaded in the event of a server crash using the same command as before.
The answer is no. QPad is the GUI that interacts with the q process. The reason you can retrieve the variables and functions is that the process did not die. By default, q will not save the queries, unless you customize your .z.pg to insert a record into a queryHistory table.
e.g.
q)queryHistory:([]queryTime:`timestamp$();query:())
q).z.pg:{[x]`queryHistory insert ([]queryTime:.z.P;query:enlist x);value x} / log the incoming query, then evaluate it as usual
q)10+10
20
q)testTab:([]sym:10?`1;val:10?100)
q)queryHistory
queryTime query
---------------
queryHistory is not appended with any record, because these queries were run in the q process itself. If you run them from your QPad instead:
10+10
testTab:([]sym:10?`1;val:10?100)
you can see records are appended, so even if your GUI crashes, you can trace the queries:
q)queryHistory
queryTime query
-------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
q)queryHistory
queryTime query
----------------------------------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
2019.02.26D17:32:52.790863000 "testTab:([]sym:10?`1;val:10?100)"
I got to know recently that there is a backup of your q scripts at "c/users//Appdata/local", autosaved every 5-6 minutes. These are temporary files which are deleted when you save the script. However, if your QPad crashed, you can find your files there :)

Running AWS commands from commandline on a ShellCommandActivity

My original problem is that I want to increase my DynamoDB write throughput before I run the pipeline, and then decrease it when I'm done uploading (doing it at most once a day, so I'm fine with the limits on decreasing throughput).
The only way I found to do it is through a shell script that issues the API commands to alter the throughput. How does that work with my AWS access_key and secret_key when it's a resource that the pipeline creates for me? (I can't log in to set the ~/.aws/config file and don't really want to create an AMI just for this.)
Should I write the script in bash? Can I use the Ruby/Python AWS SDK packages, for example? (I prefer the latter.)
How do I pass my credentials to the script? Do I have runtime variables (like #startedDate) that I can pass as arguments to the activity with my key and secret? Is there any other way to authenticate with either the command-line tools or the SDK packages?
If there is another way to solve my original problem - please let me know. I've only got to the ShellActivity solution because I couldn't find anything else in documentations/forums.
Thanks!
OK, found it: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-concepts-roles.html
The resourceRole in the default object in your pipeline will be the one assigned to resources (Ec2Resource) that are created as a part of the pipeline activation.
The default one is configured to have all your permissions, and the AWS command-line tools and SDK packages automatically look for those credentials, so there is no need to update ~/.aws/config or pass credentials manually.
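As an illustration of the original throughput change, a minimal sketch with the Python SDK (boto3), relying on those same instance-role credentials; the region, table name, and capacity values are placeholders:

import boto3

# Credentials come from the Ec2Resource's role automatically;
# nothing needs to be written to ~/.aws/config.
dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Raise write throughput before the upload (placeholder values);
# run the same call with lower values once the upload finishes.
dynamodb.update_table(
    TableName="my-table",
    ProvisionedThroughput={"ReadCapacityUnits": 10, "WriteCapacityUnits": 500},
)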

I don't know how to connect between the site and my database

I'm using SQL Server and Web Developer (C#).
I know I should do something with my connection string, but I don't know exactly what I should do, or where.
Can you write me a code example,
or explain what I need to do?
Edit:
I need to connect my database to the site (the site is on the internet).
How would I know the right path for this database?
Should I put its address, an IP, or what?
First, you have to know that you cannot just copy the database onto the server as-is.
You have to script the whole database and then load it onto your hosting provider's database server (GoDaddy or whatever you use). Your hosting provider should support you with these details: they will give you a user name, password, and other settings so you can access their SQL manager online. There you upload, or simply paste and execute, your script to create all your tables, stored procedures, and so on.
After all that, all you have to do is take the new connection string for the online database and add it to your web.config or your pages; that is the way to make it work correctly.
It depends on your ASP.NET application.
Basically, connection strings can be stored anywhere.
One suggested place is the Web.config file: look for the "connectionStrings" configuration element, and you should find there the one to change for your production server.
Look at this page:
http://www.connectionstrings.com/
You'll find SQL Server connection string examples.
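As a hypothetical sketch, a Web.config entry could look like this (the server address, database name, and credentials are whatever your host gives you):

<connectionStrings>
  <add name="MyDb"
       connectionString="Data Source=sql.example-host.com;Initial Catalog=MyDatabase;User ID=myuser;Password=mypassword"
       providerName="System.Data.SqlClient" />
</connectionStrings>

and it can then be read and used from C# roughly like this:

using System.Configuration;
using System.Data.SqlClient;

public static class Db
{
    // Opens a connection using the "MyDb" entry from Web.config.
    public static SqlConnection Open()
    {
        string cs = ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;
        var connection = new SqlConnection(cs);
        connection.Open();
        return connection;
    }
}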
There are a lot of ways to do that; try taking a look at this simple example:
http://www.csharp-station.com/Tutorials/AdoDotNet/Lesson02.aspx
I guess you need to read about it a little:
http://msdn.microsoft.com/en-us/library/ff648340.aspx
