I want all the records till the end of the file which are greater than a given date (Unix)

I want all the records up to the end of the log file, starting from the first timestamp that is greater than a given date...
Suppose the given date is Mon Dec 14 22:00:03 2015; then I want every line in the log file, from the first occurrence of a date greater than this one, through to the end.
ex: awk ' { if ( $0 > "Tue Dec 15 08:00:00 2015") print } ' file.out
only prints the lines that are themselves greater than the date, but I want all lines after the match until the end of the file.
Please note that:
1. I cannot use a plain regex, since I don't know whether an entry is present in the log file for that exact date (i.e. H:M:S), so I have to use greater-than date comparison.
2. The date is not present on every line of the log file; it appears in between, on lines of its own.
Please help.
Sample log file:
Mon Dec 14 02:00:00 2015
Clearing Resource Manager plan via parameter
Mon Dec 14 07:02:57 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 07:02:57
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:01:37 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 08:01:37
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:54:33 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 08:54:33
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:57:18 2015
Thread 1 advanced to log sequence 232
Current log# 2 seq# 232 mem# 0: /u04/app/oracle/oradata/kcom/redo02.log
Mon Dec 14 08:57:19 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc3_3953.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 09:17:45 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 09:17:45
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 10:25:24 2015
QKSRC: ViewText[ecode=942] = SELECT /*+ result_cache */ ID, 'PLUGIN_'||NAME AS NAME, STANDARD_ATTRIBUTES, SQL_MIN_COLUMN_COUNT, NVL(SQL_MAX_COLUMN_COUNT, 999) AS SQL_MAX_COLUMN_COUNT, SQL_EXAMPLES FROM WWV_FLOW_PLUGINS WHERE FLOW_ID = :B2 AND PLUGIN_TYPE = :B1
Mon Dec 14 14:31:14 2015
Thread 1 advanced to log sequence 233
Current log# 3 seq# 233 mem# 0: /u04/app/oracle/oradata/kcom/redo03.log
Mon Dec 14 14:31:15 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc0_3947.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 20:28:23 2015
Thread 1 advanced to log sequence 234
Current log# 4 seq# 234 mem# 0: /u04/app/oracle/oradata/kcom/redo04.log
Mon Dec 14 20:28:24 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc1_3949.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 22:00:00 2015
Setting Resource Manager plan SCHEDULER[0x2C09]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Mon Dec 14 22:00:03 2015
Mon Dec 14 22:00:03 2015
Logminer Bld: Lockdown Complete. DB_TXN_SCN is UnwindToSCN (LockdownSCN) is 18957974
Tue Dec 15 02:00:00 2015
Clearing Resource Manager plan via parameter
Tue Dec 15 02:00:02 2015

A simple Python 2 parser with a hardcoded datetime and input file, tested against your given log. It has plenty of room for optimization, but it works as a start, I guess.
#!/usr/bin/env python
import re
from datetime import datetime

# year, month, day, hour, minute
filter_from = datetime(2015, 12, 13, 23, 40)

with open("tmp.log") as log:
    match = False
    for line in log:
        if match:
            # Once the first qualifying timestamp has been seen,
            # everything to the end of the file is printed.
            print line,  # trailing comma: line already ends in '\n'
        else:
            # Look for a leading "Mon Dec 14 22:00:03 2015"-style timestamp.
            candidate = re.match(r'[A-Z][a-z][a-z] [A-Z][a-z][a-z] \d\d \d\d:\d\d:\d\d \d\d\d\d', line)
            if candidate:
                parsed_date = datetime.strptime(candidate.group(0), "%a %b %d %X %Y")
                if parsed_date > filter_from:
                    match = True
                    print line,
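Since the question asked for awk, the same latch-and-print idea can also be written in plain awk. This is only a sketch with a hardcoded cutoff (written as a sortable "YYYY MM DD HH:MM:SS" string), and it assumes timestamp lines always have the five-field "Mon Dec 14 22:00:03 2015" shape; file.out is recreated here with a few sample lines so the sketch is self-contained.

```shell
# Build a sortable key from each timestamp line, latch a flag at the first
# key greater than the cutoff, and print every line from there to the end.
cat > file.out <<'EOF'
Mon Dec 14 20:28:23 2015
Thread 1 advanced to log sequence 234
Mon Dec 14 22:00:03 2015
Logminer Bld: Lockdown Complete. DB_TXN_SCN is UnwindToSCN (LockdownSCN) is 18957974
Tue Dec 15 02:00:00 2015
Clearing Resource Manager plan via parameter
EOF

awk -v cutoff="2015 12 14 21:00:00" '
BEGIN {
    # Map month names to zero-padded numbers for string comparison.
    n = split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= n; i++) mon[m[i]] = sprintf("%02d", i)
}
# Only timestamp-shaped lines ("Mon Dec 14 22:00:03 2015") are compared.
!found && NF == 5 && $1 ~ /^[A-Z][a-z][a-z]$/ && ($2 in mon) {
    key = $5 " " mon[$2] " " sprintf("%02d", $3) " " $4
    if (key > cutoff) found = 1    # latch: stays set until end of file
}
found
' file.out > filtered.out

cat filtered.out
```

Because found is set before the bare found pattern is evaluated for the same record, the first matching timestamp line itself is printed too.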

Related

Why is this mercurial patch not moving the file?

I've used TortoiseHG to export a patch that moves a file to another place. This is the content of the patch:
# HG changeset patch
# User Arthur Attout <arthur.attout@outlook.com>
# Date 1551095974 -3600
# Mon Feb 25 12:59:34 2019 +0100
# Branch CBLS
# Node ID f73e7c88dbcf6de3091e1edc9360336d1c699038
# Parent 863386a2a66de9cdd6d8885912988cb4b862eef0
Unit tests + migrate unit tests int -> long
diff -r 863386a2a66d -r f73e7c88dbcf oscar-cbls/src/test/scala/oscar/cbls/test/invariants/InvariantTests.scala
--- a/oscar-cbls/src/test/scala/oscar/cbls/test/invariants/InvariantTests.scala Fri Feb 22 14:30:14 2019 +0100
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,948 +0,0 @@
-package oscar.cbls.test.invariants
-
- Thousands of lines ...
-
diff -r 863386a2a66d -r f73e7c88dbcf oscar-cbls/src/test/scala/oscar/cbls/test/unit/InvariantTests.scala
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/oscar-cbls/src/test/scala/oscar/cbls/test/unit/InvariantTests.scala Mon Feb 25 12:59:34 2019 +0100
@@ -0,0 +1,877 @@
+package oscar.cbls.test.unit
+ Same thousands of lines ...
+
This is the only thing the patch does: it moves the file InvariantTests from invariants to unit.
When I import the patch in TortoiseHG, it gives the following error:
Hunk #1 FAILED at 0
1 out of 1 hunks FAILED -- saving rejects to file oscar-cbls/src/test/scala/oscar/cbls/test/invariants/InvariantTests.scala.rej
patching file oscar-cbls/src/test/scala/oscar/cbls/test/unit/InvariantTests.scala
adding oscar-cbls/src/test/scala/oscar/cbls/test/unit/InvariantTests.scala
abandon : patch failed to apply
[command returned code 255 Mon Apr 15 19:55:02 2019]
After that, this is the content of my working directory: the file is not moved.
What does this 255 error mean? Why isn't the patch simply applying and moving the file properly?
The Mercurial documentation clearly states:
Mercurial's default format for showing changes between two versions of
a file is compatible with the unified format of GNU diff, which can be
used by GNU patch and many other standard tools.
While this standard format is often enough, it does not encode the following information:
executable status and other permission bits
copy or rename information
changes in binary files
creation or deletion of empty files
In order to have this data in the patch, you have to create a git-compatible patch with the --git option (as far as I can recall), which is also available in the TortoiseHG GUI.
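For example, git-style diffs can be switched on permanently in your ~/.hgrc (or mercurial.ini on Windows), so every export records renames; this is a sketch, and passing --git directly to hg export works as well:

```
[diff]
git = true
```

With this enabled, and assuming the move was recorded with hg rename / hg mv, the exported patch carries "rename from" / "rename to" headers instead of a full delete-plus-add, so importing it actually moves the file.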

NSS + PAM + TACACS+: first session fails

I have a device that I want to authorize using a TACACS+ server.
The TACACS+ version is tac_plus F4.0.4.26.
The TACACS+ server has the following configuration:
accounting file = /var/log/tac_plus.acct
key = testing123
default authentication = file /etc/passwd
user = sf {
default service = permit
login = cleartext 1234
}
user = DEFAULT {
# login = PAM
service = ppp protocol = ip {}
}
On the device I have NSS with this config:
/etc/nsswitch.conf
passwd: files rf
group: files
shadow: files
hosts: files dns
networks: files dns
protocols: files
services: files
ethers: files
rpc: files
and pam.d has an sshd file in it:
# SERVER 1
auth required /lib/security/pam_rf.so
auth [success=done auth_err=die default=ignore] /lib/security/pam_tacplus.so server=172.18.177.162:49 secret=testing123 timeout=5
account sufficient /lib/security/pam_tacplus.so server=172.18.177.162:49 service=ppp protocol=ip timeout=5
session required /lib/security/pam_rf.so
session sufficient /lib/security/pam_tacplus.so server=172.18.177.162:49 service=ppp protocol=ip timeout=5
password required /lib/security/pam_rf.so
# PAM configuration for the Secure Shell service
# Standard Un*x authentication.
auth include common-auth
# Disallow non-root logins when /etc/nologin exists.
account required pam_nologin.so
# Standard Un*x authorization.
account include common-account
# Set the loginuid process attribute.
session required pam_loginuid.so
# Standard Un*x session setup and teardown.
session include common-session
# Standard Un*x password updating.
password include common-password
The problem: when I connect to the device for the first time via TeraTerm, I see that the entered user name is added to /etc/passwd and /etc/shadow at session start,
but the login does not succeed, and on the TACACS+ server I see in the logs:
Mon Dec 17 19:00:05 2018 [25418]: session.peerip is 172.17.236.2
Mon Dec 17 19:00:05 2018 [25418]: forked 5385
Mon Dec 17 19:00:05 2018 [5385]: connect from 172.17.236.2 [172.17.236.2]
Mon Dec 17 19:00:05 2018 [5385]: Found entry for alex in shadow file
Mon Dec 17 19:00:05 2018 [5385]: verify
IN $6$DUikjB1i$4.cM87/pWRZg2lW3gr3TZorAReVL7JlKGA/2.BRi7AAyHQHz6bBenUxGXsrpzXkVvpwp0CrtNYAGdQDYT2gaZ/
Mon Dec 17 19:00:05 2018 [5385]:
IN encrypts to $6$DUikjB1i$AM/ZEXg6UAoKGrFQOzHC6/BpkK0Rw4JSmgqAc.xJ9S/Q7n8.bT/Ks73SgLdtMUAGbLAiD9wnlYlb84YGujaPS/
Mon Dec 17 19:00:05 2018 [5385]: Password is incorrect
Mon Dec 17 19:00:05 2018 [5385]: Authenticating ACLs for user 'DEFAULT' instead of 'alex'
Mon Dec 17 19:00:05 2018 [5385]: pap-login query for 'alex' ssh from 172.17.236.2 rejected
Mon Dec 17 19:00:05 2018 [5385]: login failure: alex 172.17.236.2 (172.17.236.2) ssh
After that, if I close TeraTerm, open it again and try to connect, the connection is established successfully; if I then close TeraTerm and open it once more, the same problem appears. It fails on every second try.
What might the problem be? It is driving me crazy already.
After digging deeply into the problem, I found out that it was my fault: I had compiled my name service module using g++ instead of gcc.
The name service uses
#include <pwd.h>
which defines the interface for functions like nss_service_getpwnam_r and others written in C, so I had to either wrap the include:
extern "C" {
#include <pwd.h>
}
or compile the module using gcc. I hope this helps anyone who faces the same problem one day. Good luck!

Shiny server Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}

I've encountered a problem deploying my Shiny app on Ubuntu 16.04 LTS.
After I run sudo systemctl start shiny-server and open my browser at http://192.168..*:3838/StockVis/, the web page greys out within a second.
I found some warnings in the web console (below) and have been searching the web for about two weeks, but still have no solution. :(
Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [INF]: Connection opened. http://192.168.**.***:3838/StockVis/
Thu Feb 16 2017 12:20:49 GMT+0800 (CST) [DBG]: Open channel 0
The application unexpectedly exited.
Diagnostic information is private. Please ask your system admin for permission if you need to check the R logs.
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [INF]: Connection closed. Info: {"type":"close","code":4503,"reason":"The application unexpectedly exited","wasClean":true}
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: SockJS connection closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Channel 0 is closed
Thu Feb 16 2017 12:20:50 GMT+0800 (CST) [DBG]: Removed channel 0, 0 left
Please kindly give some suggestions to move on.
This can indicate that something in your R code is causing an error. As that R error could be anything, this answer is meant to help you gather that information; the browser console messages will not tell you what it is. In order to access the error, you need to configure Shiny not to delete the log when the application exits.
Assuming you have sudo access:
$ sudo vi /etc/shiny-server/shiny-server.conf
Place the following line in the file after the run_as shiny; line:
preserve_logs true;
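The top of /etc/shiny-server/shiny-server.conf would then look roughly like this (a sketch; everything below your run_as line stays as it was):

```
run_as shiny;
preserve_logs true;

server {
  listen 3838;
  # ... the rest of your existing configuration ...
}
```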
Restart shiny:
sudo systemctl restart shiny-server
Reload your Shiny app.
In the /var/log/shiny-server/ directory there will be a log file named after your application. Viewing that file will give you more information on what is going on.
Warning: after you are done, take the preserve_logs true; line back out of the conf file and restart Shiny. Otherwise you will keep generating a bunch of log files you don't want.

jMeter Distributed Testing: Master won't shut down

I have a simple 4 server setup running jMeter (3 slaves, 1 master):
Slave 1: 10.135.62.18 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.18
Slave 2: 10.135.62.22 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.22
Slave 3: 10.135.62.20 running ./jmeter-server -Djava.rmi.server.hostname=10.135.62.20
Master: 10.135.62.11 with remote_hosts=10.135.62.18,10.135.62.22,10.135.62.20
I start the test with ./jmeter -n -t /root/jmeter/simple.jmx -l /root/jmeter/result.jtl -r
With the following output:
Writing log file to: /root/apache-jmeter-3.0/bin/jmeter.log
Creating summariser <summary>
Created the tree successfully using /root/jmeter/simple.jmx
Configuring remote engine: 10.135.62.18
Configuring remote engine: 10.135.62.22
Configuring remote engine: 10.135.62.20
Starting remote engines
Starting the test @ Mon Aug 29 11:22:38 UTC 2016 (1472469758410)
Remote engines have been started
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
The Slaves print:
Starting the test on host 10.135.62.22 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759257)
Finished the test on host 10.135.62.22 @ Mon Aug 29 11:22:54 UTC 2016 (1472469774871)
Starting the test on host 10.135.62.18 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759519)
Finished the test on host 10.135.62.18 @ Mon Aug 29 11:22:57 UTC 2016 (1472469777173)
Starting the test on host 10.135.62.20 @ Mon Aug 29 11:22:39 UTC 2016 (1472469759775)
Finished the test on host 10.135.62.20 @ Mon Aug 29 11:22:56 UTC 2016 (1472469776670)
Unfortunately, the master waits for messages on port 4445 indefinitely, even though all slaves finished the test.
Is there anything I have missed?
I figured it out myself just before submitting the question. I guess the solution could be useful nonetheless:
Once I start the test (on the main server) with this:
./jmeter -n -t /root/jmeter/simple.jmx -l /root/jmeter/result.jtl -r -Djava.rmi.server.hostname=10.135.62.11 -Dclient.rmi.localport=4001
It works just fine. I wonder why the documentation doesn't mention something like this.
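If you would rather not type the flags on every run, the equivalent settings can live in the master's bin directory. This is a sketch using the IPs from this setup; note that java.rmi.server.hostname is a JVM system property, so it belongs in system.properties rather than jmeter.properties:

```
# bin/jmeter.properties (master): the slaves to control, and the local
# port the master listens on for results coming back from them
remote_hosts=10.135.62.18,10.135.62.22,10.135.62.20
client.rmi.localport=4001

# bin/system.properties (master): JVM system properties
java.rmi.server.hostname=10.135.62.11
```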

Interesting variation of the TNSPING Message 3511 error

I have the following problem with the Oracle 11g TNSPING utility: fresh 11g client installation, ORACLE_HOME set up properly, TNS_ADMIN also set, msb files available in the desired location and readable by the user that invokes TNSPING. Connecting to the database works; sqlplus can be used to connect. TNSPING, however, fails badly. See the attached log:
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.
C:\Users\USERNAME>set ORACLE_HOME=C:\Work\Software\Oracle\product\11.2.0\client_1
C:\Users\USERNAME>set TNS_ADMIN=%ORACLE_HOME%\network\admin
C:\Users\USERNAME>set NLS_LANG=ENGLISH_POLAND.EE8MSWIN1250
C:\Users\USERNAME>dir %TNS_ADMIN%\tnsnames.ora
Volume in drive C is XXX
Volume Serial Number is YYY
Directory of C:\Work\Software\Oracle\product\11.2.0\client_1\network\admin
2013-03-19 12:26 358 tnsnames.ora
1 File(s) 358 bytes
0 Dir(s) 59 082 616 832 bytes free
C:\Users\USERNAME>dir %ORACLE_HOME%\network\mesg\tns*.msb
Volume in drive C is XXX
Volume Serial Number is YYY
Directory of C:\Work\Software\Oracle\product\11.2.0\client_1\network\mesg
2010-03-31 07:01 53 248 tnspl.msb
2010-03-31 07:01 47 104 tnsus.msb
2 File(s) 100 352 bytes
0 Dir(s) 59 082 616 832 bytes free
C:\Users\USERNAME>sqlplus <USER>/<PASSWORD>@<DATABASE>
SQL*Plus: Release 11.2.0.1.0 Production on Tue Mar 19 14:36:34 2013
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
SQL> exit
Disconnected from Oracle Database 11g Release 11.2.0.2.0 - 64bit Production
C:\Users\USERNAME>tnsping <DATABASE>
TNS Ping Utility for 32-bit Windows: Version 11.2.0.1.0 - Production on 19-MAR-2013 14:37:52
Copyright (c) 1997, 2010, Oracle. All rights reserved.
Message 3511 not found; No message file for product=NETWORK, facility=TNSMessage 3512 not found; No message file for product=NETWORK, facility=TNSMessage 3513 not found; No message file for product=NETWORK, facility=TNSMessage 3509 not found; No message file for product=NETWORK, facility=TNS
Please note: I had to replace all user credentials, database names, etc., due to security considerations.
Can you please help me troubleshoot this issue? I have read all the other Stack Exchange topics related to TNSPING failures, but each of the solutions (setting up ORACLE_HOME, reinstalling, making sure the registry points to the proper home, checking the msb files) has failed me...
Thanks in advance!
