How do I set logging to debug level - r

By default, the package logging only processes messages at level INFO. Now I want to log DEBUG messages, too. From the tutorial I adapted the following snippet:
library(logging)
logReset()
addHandler(writeToConsole)
setLevel("DEBUG", getHandler("writeToConsole"))
getHandler("writeToConsole")[["level"]]==loglevels["DEBUG"]
#TRUE
loginfo("this is an info")
#2018-06-15 13:04:40 INFO::this is an info
logdebug("this is a debug note.")
# nothing happens
What am I doing wrong?

In the logging package, levels are set on both the logger and the handler objects. In your code you use the root logger with a custom handler: the handler's level is set to DEBUG, but the logger's own level is still 'INFO', and it is the logger's level that decides which messages are passed on to its handlers at all.
After logReset() you reset the root logger and remove all default handlers:
logReset()
getLogger()[['level']] # accessing the root logger
# INFO
# 20
with(getLogger(logger=''), names(handlers))
# character(0)
Hence your code:
library(logging)
logReset()
addHandler(writeToConsole)
setLevel("DEBUG", getHandler("writeToConsole"))
getHandler("writeToConsole")[["level"]]==loglevels["DEBUG"]
#TRUE
loginfo("this is an info")
# 2021-11-11 17:35:25 INFO::this is an info
logdebug("this is a debug note.")
# nothing
can be adjusted as follows (run after execution of your script):
getLogger()[['level']]
# INFO
# 20
setLevel('DEBUG') # or setLevel('DEBUG', container = '') where it is clear that the root logger is being accessed
getLogger()[['level']]
# DEBUG
# 10
logdebug("this is a debug note.")
# 2021-11-11 17:36:55 DEBUG::this is a debug note.
Another, simpler way is to use basicConfig() to set up the root logger, like:
logReset()
basicConfig('DEBUG')
loginfo("this is an info")
# 2021-11-11 17:37:43 INFO::this is an info
logdebug("this is a debug note")
# 2021-11-11 17:37:43 DEBUG::this is a debug note
or yet another option, with a dedicated logger object:
logReset()
addHandler(writeToConsole, logger='my_logger')
lrc <- getLogger('my_logger')
lrc$level
# NOTSET
# 0
lrc$setLevel('DEBUG')
lrc$level
# DEBUG
# 10
setLevel('INFO', lrc$handlers$writeToConsole)
lrc$debug("test")
# nothing
setLevel('DEBUG', lrc$handlers$writeToConsole)
lrc$debug("test")
# 2021-11-11 17:38:32 DEBUG:my_logger:test

Related

Sorry, I'm stuck with FreeRADIUS to LDAP

OK, I'm a little stuck with FreeRADIUS and a little lost.
I think I have found the problem, I just don't understand why.
If I try to authenticate over Wi-Fi against RADIUS, it looks like it's not getting the password. Below is the debug output of that:
Ready to process requests
(0) Received Access-Request Id 149 from 192.168.200.238:49881 to 192.168.20.2:1812 length 227
(0) User-Name = "testing"
(0) NAS-IP-Address = 192.168.200.238
(0) NAS-Identifier = "d221f94b63df"
(0) Called-Station-Id = "D2-21-F9-4B-63-DF:test no join"
(0) NAS-Port-Type = Wireless-802.11
(0) Service-Type = Framed-User
(0) Calling-Station-Id = "D2-5A-22-F3-F6-A1"
(0) Connect-Info = "CONNECT 0Mbps 802.11a"
(0) Acct-Session-Id = "08DE2818B2804F38"
(0) Acct-Multi-Session-Id = "47EF77EBC7B5BF7A"
(0) WLAN-Pairwise-Cipher = 1027076
(0) WLAN-Group-Cipher = 1027076
(0) WLAN-AKM-Suite = 1027073
(0) Framed-MTU = 1400
(0) EAP-Message = 0x02bd000c0174657374696e67
(0) Message-Authenticator = 0xcef6985af177d3099edb44dbcfaba6e7
(0) # Executing section authorize from file /etc/freeradius/3.0/sites-enabled/my_server
(0) authorize {
rlm_ldap (ldap): Reserved connection (0)
(0) ldap: EXPAND (cn=%{%{Stripped-User-Name}:-%{User-Name}})
(0) ldap: --> (cn=testing)
(0) ldap: Performing search in "ou=users,dc=ldap,DC=alexosaurous,DC=co,DC=nz" with filter "(cn=testing)", scope "sub"
(0) ldap: Waiting for search result...
(0) ldap: User object found at DN "cn=testing,ou=users,dc=ldap,dc=alexosaurous,dc=co,dc=nz"
(0) ldap: Processing user attributes
(0) ldap: WARNING: No "known good" password added. Ensure the admin user has permission to read the password attribute
(0) ldap: WARNING: PAP authentication will *NOT* work with Active Directory (if that is what you were trying to configure)
rlm_ldap (ldap): Released connection (0)
(0) [ldap] = ok
(0) if ((ok || updated) && User-Password) {
(0) if ((ok || updated) && User-Password) -> FALSE
(0) } # authorize = ok
(0) ERROR: No Auth-Type found: rejecting the user via Post-Auth-Type = Reject
(0) Failed to authenticate the user
(0) Using Post-Auth-Type Reject
(0) Post-Auth-Type sub-section not found. Ignoring.
(0) Delaying response for 1.000000 seconds
Waking up in 0.3 seconds.
Waking up in 0.6 seconds.
(0) Sending delayed response
(0) Sent Access-Reject Id 149 from 192.168.20.2:1812 to 192.168.200.238:49881 length 20
Waking up in 3.9 seconds.
(0) Cleaning up request packet ID 149 with timestamp +10 due to cleanup_delay was reached
As you can see, there is no password in that request (unless I'm missing something, which is possible), but when I do a radtest I get Access-Accept. Below is the debug log from doing it that way:
radtest testing test localhost 2 testing123 root@docker-host
Sent Access-Request Id 76 from 0.0.0.0:39308 to 127.0.0.1:1812 length 77
User-Name = "testing"
User-Password = "test"
NAS-IP-Address = 127.0.1.1
NAS-Port = 2
Message-Authenticator = 0x00
Cleartext-Password = "test"
Received Access-Accept Id 76 from 127.0.0.1:1812 to 127.0.0.1:39308 length 20
Ready to process requests
q(1) Received Access-Request Id 163 from 127.0.0.1:53905 to 127.0.0.1:1812 length 77
(1) User-Name = "testing"
(1) User-Password = "test"
(1) NAS-IP-Address = 127.0.1.1
(1) NAS-Port = 2
(1) Message-Authenticator = 0xfade5a334cefa11b8d1c07ea3ca02fae
(1) # Executing section authorize from file /etc/freeradius/3.0/sites-enabled/my_server
(1) authorize {
rlm_ldap (ldap): Reserved connection (1)
(1) ldap: EXPAND (cn=%{%{Stripped-User-Name}:-%{User-Name}})
(1) ldap: --> (cn=testing)
(1) ldap: Performing search in "ou=users,dc=ldap,DC=alexosaurous,DC=co,DC=nz" with filter "(cn=testing)", scope "sub"
(1) ldap: Waiting for search result...
(1) ldap: User object found at DN "cn=testing,ou=users,dc=ldap,dc=alexosaurous,dc=co,dc=nz"
(1) ldap: Processing user attributes
(1) ldap: WARNING: No "known good" password added. Ensure the admin user has permission to read the password attribute
(1) ldap: WARNING: PAP authentication will *NOT* work with Active Directory (if that is what you were trying to configure)
rlm_ldap (ldap): Released connection (1)
rlm_ldap (ldap): Closing connection (2) - Too many unused connections.
rlm_ldap (ldap): You probably need to lower "min"
rlm_ldap (ldap): Closing expired connection (4) - Hit idle_timeout limit
rlm_ldap (ldap): You probably need to lower "min"
rlm_ldap (ldap): Closing expired connection (3) - Hit idle_timeout limit
(1) [ldap] = ok
(1) if ((ok || updated) && User-Password) {
(1) if ((ok || updated) && User-Password) -> TRUE
(1) if ((ok || updated) && User-Password) {
(1) update {
(1) control:Auth-Type := LDAP
(1) } # update = noop
(1) } # if ((ok || updated) && User-Password) = noop
(1) } # authorize = ok
(1) Found Auth-Type = LDAP
(1) # Executing group from file /etc/freeradius/3.0/sites-enabled/my_server
(1) Auth-Type LDAP {
rlm_ldap (ldap): Reserved connection (0)
(1) ldap: Login attempt by "testing"
(1) ldap: Using user DN from request "cn=testing,ou=users,dc=ldap,dc=alexosaurous,dc=co,dc=nz"
(1) ldap: Waiting for bind result...
(1) ldap: Bind successful
(1) ldap: Bind as user "cn=testing,ou=users,dc=ldap,dc=alexosaurous,dc=co,dc=nz" was successful
rlm_ldap (ldap): Released connection (0)
(1) [ldap] = ok
(1) } # Auth-Type LDAP = ok
(1) Sent Access-Accept Id 163 from 127.0.0.1:1812 to 127.0.0.1:53905 length 20
(1) Finished request
Waking up in 4.9 seconds.
(1) Cleaning up request packet ID 163 with timestamp +67 due to cleanup_delay was reached
Ready to process requests
(2) Received Access-Request Id 210 from 127.0.0.1:49536 to 127.0.0.1:1812 length 77
Dropping packet without response because of error: Received packet from 127.0.0.1 with invalid Message-Authenticator! (Shared secret is incorrect.)
Waking up in 0.3 seconds.
(2) Cleaning up request packet ID 210 with timestamp +109 due to done
Ready to process requests
In that request it looks like it got the password, passed it to LDAP, and then authenticated the username and password.
I'm very lost as to why the phone over Wi-Fi is not sending the password.
Config below.
sites-enabled:
server my_server {
listen {
type = auth
ipaddr = *
port = 1812
}
authorize {
ldap
if ((ok || updated) && User-Password) {
update {
control:Auth-Type := ldap
}
}
}
authenticate {
Auth-Type LDAP {
ldap
}
}
}
LDAP config
# -*- text -*-
#
# $Id: 1f0ee0383834684c7314a89be40003933023c401 $
#
# Lightweight Directory Access Protocol (LDAP)
#
ldap {
# Note that this needs to match the name(s) in the LDAP server
# certificate, if you're using ldaps. See OpenLDAP documentation
# for the behavioral semantics of specifying more than one host.
server = "auth.domain"
# Port to connect on, defaults to 389. Setting this to 636 will enable
# LDAPS if start_tls (see below) is not able to be used.
port = "389"
# Administrator account for searching and possibly modifying.
identity = "cn=myserviceaccount,dc=domain"
password = ""
# Unless overridden in another section, the dn from which all
# searches will start from.
base_dn = "dc=ldap,dc=alexosaurous,dc=co,dc=nz"
#
# Generic valuepair attribute
#
# If set, this will attribute will be retrieved in addition to any
# mapped attributes.
#
# Values should be in the format:
# <radius attr> <op> <value>
#
# Where:
# <radius attr>: Is the attribute you wish to create
# with any valid list and request qualifiers.
# <op>: Is any assignment attribute (=, :=, +=, -=).
# <value>: Is the value to parse into the new valuepair.
# If the attribute name is wrapped in double
# quotes it will be xlat expanded.
# valuepair_attribute = "radiusAttribute"
#
# Mapping of LDAP directory attributes to RADIUS dictionary attributes.
#
# WARNING: Although this format is almost identical to the unlang
# update section format, it does *NOT* mean that you can use other
# unlang constructs in module configuration files.
#
# Configuration items are in the format:
# <radius attr> <op> <ldap attr>
#
# Where:
# <radius attr>: Is the destination RADIUS attribute
# with any valid list and request qualifiers.
# <op>: Is any assignment attribute (=, :=, +=, -=).
# <ldap attr>: Is the attribute associated with user or
# profile objects in the LDAP directory.
# If the attribute name is wrapped in double
# quotes it will be xlat expanded.
#
# Request and list qualifiers may also be placed after the 'update'
# section name to set defaults destination requests/lists
# for unqualified RADIUS attributes.
#
# Note: LDAP attribute names should be single quoted unless you want
# the name value to be derived from an xlat expansion, or an
# attribute ref.
update {
control:Password-With-Header += 'userPassword'
# control:NT-Password := 'ntPassword'
# reply:Reply-Message := 'radiusReplyMessage'
# reply:Tunnel-Type := 'radiusTunnelType'
# reply:Tunnel-Medium-Type := 'radiusTunnelMediumType'
# reply:Tunnel-Private-Group-ID := 'radiusTunnelPrivategroupId'
# These are provided for backwards compatibility.
# Where only a list is specified as the RADIUS attribute,
# the value of the LDAP attribute is parsed as a valuepair
# in the same format as the 'valuepair_attribute' (above).
# control: += 'radiusCheckAttributes'
# reply: += 'radiusReplyAttributes'
}
# Set to yes if you have eDirectory and want to use the universal
# password mechanism.
# edir = no
# Set to yes if you want to bind as the user after retrieving the
# Cleartext-Password. This will consume the login grace, and
# verify user authorization.
# edir_autz = no
# Note: set_auth_type was removed in v3.x.x
# Equivalent functionality can be achieved by adding the following
# stanza to the authorize {} section of your virtual server.
#
# ldap
# if ((ok || updated) && User-Password) {
# update {
# control:Auth-Type := ldap
# }
# }
#
# User object identification.
#
user {
# Where to start searching in the tree for users
base_dn = "ou=users,dc=ldap,DC=alexosaurous,DC=co,DC=nz"
# Filter for user objects, should be specific enough
# to identify a single user object.
filter = "(cn=%{%{Stripped-User-Name}:-%{User-Name}})"
# Search scope, may be 'base', 'one', sub' or 'children'
# scope = 'sub'
# If this is undefined, anyone is authorised.
# If it is defined, the contents of this attribute
# determine whether or not the user is authorised
# access_attribute = "dialupAccess"
# Control whether the presence of "access_attribute"
# allows access, or denys access.
#
# If "yes", and the access_attribute is present, or
# "no" and the access_attribute is absent then access
# will be allowed.
#
# If "yes", and the access_attribute is absent, or
# "no" and the access_attribute is present, then
# access will not be allowed.
#
# If the value of the access_attribute is "false", it
# will negate the result.
#
# e.g.
# access_positive = yes
# access_attribute = userAccessAllowed
#
# userAccessAllowed = false
#
# Will result in the user being locked out.
# access_positive = yes
}
#
# User membership checking.
#
group {
# Where to start searching in the tree for groups
base_dn = "ou=Groups,dc=ldap,DC=alexosaurous,DC=co,DC=nz"
# Filter for group objects, should match all available
# group objects a user might be a member of.
filter = "(objectClass=posixGroup)"
# Search scope, may be 'base', 'one', sub' or 'children'
# scope = 'sub'
# Attribute that uniquely identifies a group.
# Is used when converting group DNs to group
# names.
name_attribute = cn
# Filter to find group objects a user is a member of.
# That is, group objects with attributes that
# identify members (the inverse of membership_attribute).
membership_filter = "(|(member=%{control:Ldap-UserDn})(memberUid=%{%{Stripped-User-Name}:-%{User-Name}}))"
# The attribute in user objects which contain the names
# or DNs of groups a user is a member of.
#
# Unless a conversion between group name and group DN is
# needed, there's no requirement for the group objects
# referenced to actually exist.
# membership_attribute = "memberOf"
# If cacheable_name or cacheable_dn are enabled,
# all group information for the user will be
# retrieved from the directory and written to LDAP-Group
# attributes appropriate for the instance of rlm_ldap.
#
# For group comparisons these attributes will be checked
# instead of querying the LDAP directory directly.
#
# This feature is intended to be used with rlm_cache.
#
# If you wish to use this feature, you should enable
# the type that matches the format of your check items
# i.e. if your groups are specified as DNs then enable
# cacheable_dn else enable cacheable_name.
# cacheable_name = "no"
# cacheable_dn = "no"
# Override the normal cache attribute (<inst>-LDAP-Group)
# and create a custom attribute. This can help if multiple
# module instances are used in fail-over.
# cache_attribute = "LDAP-Cached-Membership"
}
#
# User profiles. RADIUS profile objects contain sets of attributes
# to insert into the request. These attributes are mapped using
# the same mapping scheme applied to user objects.
#
profile {
# Filter for RADIUS profile objects
# filter = "(objectclass=radiusprofile)"
# The default profile applied to all users.
# default = "cn=radprofile,dc=example,dc=org"
# The list of profiles which are applied (after the default)
# to all users.
# The "User-Profile" attribute in the control list
# will override this setting at run-time.
# attribute = "radiusProfileDn"
}
#
# Bulk load clients from the directory
#
client {
# Where to start searching in the tree for clients
base_dn = "ou=Clients,dc=example,dc=com"
#
# Filter to match client objects
#
filter = '(objectClass=frClient)'
# Search scope, may be 'base', 'one', 'sub' or 'children'
# scope = 'sub'
#
# Client attribute mappings are in the format:
# <client attribute> = <ldap attribute>
#
# Arbitrary attributes (accessible by %{client:<attr>}) are not yet supported.
#
# The following attributes are required:
# * identifier - IPv4 address, or IPv4 address with prefix, or hostname.
# * secret - RADIUS shared secret.
#
# The following attributes are optional:
# * shortname - Friendly name associated with the client
# * nas_type - NAS Type
# * virtual_server - Virtual server to associate the client with
# * require_message_authenticator - Whether we require the Message-Authenticator
# attribute to be present in requests from the client.
#
# Schemas are available in doc/schemas/ldap for openldap and eDirectory
#
attribute {
identifier = 'radiusClientIdentifier'
secret = 'radiusClientSecret'
# shortname = 'radiusClientShortname'
# nas_type = 'radiusClientType'
# virtual_server = 'radiusClientVirtualServer'
# require_message_authenticator = 'radiusClientRequireMa'
}
}
# Load clients on startup
# read_clients = no
#
# Modify user object on receiving Accounting-Request
#
# Useful for recording things like the last time the user logged
# in, or the Acct-Session-ID for CoA/DM.
#
# LDAP modification items are in the format:
# <ldap attr> <op> <value>
#
# Where:
# <ldap attr>: The LDAP attribute to add modify or delete.
# <op>: One of the assignment operators:
# (:=, +=, -=, ++).
# Note: '=' is *not* supported.
# <value>: The value to add modify or delete.
#
# WARNING: If using the ':=' operator with a multi-valued LDAP
# attribute, all instances of the attribute will be removed and
# replaced with a single attribute.
accounting {
reference = "%{tolower:type.%{Acct-Status-Type}}"
type {
start {
update {
description := "Online at %S"
}
}
interim-update {
update {
description := "Last seen at %S"
}
}
stop {
update {
description := "Offline at %S"
}
}
}
}
#
# Post-Auth can modify LDAP objects too
#
post-auth {
update {
description := "Authenticated at %S"
}
}
#
# LDAP connection-specific options.
#
# These options set timeouts, keep-alives, etc. for the connections.
#
options {
# Control under which situations aliases are followed.
# May be one of 'never', 'searching', 'finding' or 'always'
# default: libldap's default which is usually 'never'.
#
# LDAP_OPT_DEREF is set to this value.
# dereference = 'always'
#
# The following two configuration items control whether the
# server follows references returned by LDAP directory.
# They are mostly for Active Directory compatibility.
# If you set these to "no", then searches will likely return
# "operations error", instead of a useful result.
#
chase_referrals = yes
rebind = yes
# Seconds to wait for LDAP query to finish. default: 20
timeout = 10
# Seconds LDAP server has to process the query (server-side
# time limit). default: 20
#
# LDAP_OPT_TIMELIMIT is set to this value.
timelimit = 3
# Seconds to wait for response of the server. (network
# failures) default: 10
#
# LDAP_OPT_NETWORK_TIMEOUT is set to this value.
net_timeout = 1
# LDAP_OPT_X_KEEPALIVE_IDLE
idle = 60
# LDAP_OPT_X_KEEPALIVE_PROBES
probes = 3
# LDAP_OPT_X_KEEPALIVE_INTERVAL
interval = 3
# ldap_debug: debug flag for LDAP SDK
# (see OpenLDAP documentation). Set this to enable
# huge amounts of LDAP debugging on the screen.
# You should only use this if you are an LDAP expert.
#
# default: 0x0000 (no debugging messages)
# Example:(LDAP_DEBUG_FILTER+LDAP_DEBUG_CONNS)
ldap_debug = 0x0028
}
#
# This subsection configures the tls related items
# that control how FreeRADIUS connects to an LDAP
# server. It contains all of the "tls_*" configuration
# entries used in older versions of FreeRADIUS. Those
# configuration entries can still be used, but we recommend
# using these.
#
tls {
# Set this to 'yes' to use TLS encrypted connections
# to the LDAP database by using the StartTLS extended
# operation.
#
# The StartTLS operation is supposed to be
# used with normal ldap connections instead of
# using ldaps (port 636) connections
start_tls = no
# ca_file = ${certdir}/cacert.pem
# ca_path = ${certdir}
# certificate_file = /path/to/radius.crt
# private_key_file = /path/to/radius.key
# random_file = ${certdir}/random
# Certificate Verification requirements. Can be:
# "never" (don't even bother trying)
# "allow" (try, but don't fail if the certificate
# can't be verified)
# "demand" (fail if the certificate doesn't verify.)
#
# The default is "allow"
# require_cert = "demand"
}
# As of version 3.0, the "pool" section has replaced the
# following configuration items:
#
# ldap_connections_number
# The connection pool is new for 3.0, and will be used in many
# modules, for all kinds of connection-related activity.
#
# When the server is not threaded, the connection pool
# limits are ignored, and only one connection is used.
pool {
# Number of connections to start
start = 5
# Minimum number of connections to keep open
min = 4
# Maximum number of connections
#
# If these connections are all in use and a new one
# is requested, the request will NOT get a connection.
#
# Setting 'max' to LESS than the number of threads means
# that some threads may starve, and you will see errors
# like "No connections available and at max connection limit"
#
# Setting 'max' to MORE than the number of threads means
# that there are more connections than necessary.
max = ${thread[pool].max_servers}
# Spare connections to be left idle
#
# NOTE: Idle connections WILL be closed if "idle_timeout"
# is set.
spare = 3
# Number of uses before the connection is closed
#
# 0 means "infinite"
uses = 0
# The lifetime (in seconds) of the connection
lifetime = 0
# Idle timeout (in seconds). A connection which is
# unused for this length of time will be closed.
idle_timeout = 60
# NOTE: All configuration settings are enforced. If a
# connection is closed because of "idle_timeout",
# "uses", or "lifetime", then the total number of
# connections MAY fall below "min". When that
# happens, it will open a new connection. It will
# also log a WARNING message.
#
# The solution is to either lower the "min" connections,
# or increase lifetime/idle_timeout.
}
}
Side note: my user filter is a bit different, as I used the authentik LDAP outpost, and as per
https://goauthentik.io/docs/providers/ldap
the username is mapped to cn.
Thank you for taking the time to read all of this, by the way.
Assuming you're using EAP-PEAP, are the passwords being stored in your LDAP directory as either plaintext (not advisable in production) or NTLM hashes?
If they're being stored as SHA hashes, for example, you'll run into the "no known good password" issue: the supplicant responds to the Access-Challenge from the NAS with an NTLM hash, which FreeRADIUS can't use to compute the corresponding SHA hash it receives from the LDAP server after binding.
When you're using radtest, you're sending a plaintext password, which FreeRADIUS can convert to the appropriate hash for comparison.
If you're not using PEAP and/or your passwords are stored in your directory as plaintext or NTLM hashes, you can disregard this.

How to visualize a clog2 file from MPI/MPE using jumpshot

I am following the examples given in the “Using MPI” book and am working on the example that turns on MPE logging (pmatmatlog.c; I ported it from the Fortran example). It runs and produces a log file called “pmatmat.log.clog2”. I would like to visualize this log file.
I started by trying to use “jumpshot-4”, because that is what is installed and I cannot find a way to download jumpshot-2 (which appears to favor log files in the clog2 format?). Jumpshot-4, however, wants files in slog2 format and gives an error about not finding “clog2TOslog2” in the TAU directory tree.
Looking at the head of the clog2 file, it appears to be correct as far as I can tell:
$ head pmatmat.log.clog2
CLOG-02.44is_big_endian=TRUE is_finalzed=TRUE block_size=65536num_buffered_blocks=128max_comm_world_size=4max_thread_count=1known_eventID_start=0user_eventID_start=600known_solo_eventID_start=-10user_solo_eventID_start=5000known_stateID_count=300user_stateID_count=4known_solo_eventID_count=0user_solo_eventID_count=0commtable_fptr=0107374182466560>? … <and on into a lot of unreadable binary>
When I try to convert my clog2 file from the command line by calling “clog2TOslog2”, I get the following:
$ Clog2ToSlog2 ./pmatmat.log.clog2
GUI_LIBDIR is set. GUI_LIBDIR = /Users/markrbower/mpi/lib
**** Error! State!=State
**** Error! State!=State
**** Error! State!=State
**** Error! State!=State
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at logformat.clog2TOdrawable.InputLog$ContentIterator.hasNext(InputLog.java:481)
at logformat.clog2TOdrawable.InputLog.peekNextKind(InputLog.java:58)
at logformat.slog2.output.Clog2ToSlog2.main(Clog2ToSlog2.java:77)
Caused by: logformat.clog2TOdrawable.NoMatchingEventException: No matching State end-event for Record RecHeader[ time=1.9073486328125E-5, icomm=0, rank=1, thread=0, rectype=6 ], RecCargo[ etype=1, bytes=bstartbendcomputecomputedpma ]
at logformat.clog2TOdrawable.Topo_State.matchFinalEvent(Topo_State.java:95)
... 7 more
java.lang.reflect.InvocationTargetException
After a lot of errors that all look to be “InvocationTargetException” as above, the tail says:
…
SLOG-2 Header:
version = SLOG 2.0.6
NumOfChildrenPerNode = 2
TreeLeafByteSize = 65536
MaxTreeDepth = 0
MaxBufferByteSize = 1346
Categories is FBinfo(157 # 1454)
MethodDefs is FBinfo(0 # 0)
LineIDMaps is FBinfo(232 # 1611)
TreeRoot is FBinfo(1346 # 108)
TreeDir is FBinfo(38 # 1843)
Annotations is FBinfo(0 # 0)
Postamble is FBinfo(0 # 0)
Number of Drawables = 20
Number of Unmatched Events = 0
Total ByteSize of the logfile = 8320
timeElapsed between 1 & 2 = 13 msec
timeElapsed between 2 & 3 = 79 msec
$
There are several routes I could see to finally get to visualizing the log file:
1. re-install TAU and hope that allows jumpshot-4 to find the converter
2. install MPI2 with MPE2 and try the newer version of everything
3. find a way to download and install Jumpshot-2 and hope that reads clog2 files
4. find some other way to convert clog2 to slog2
Why isn’t the conversion working and which is the best option to pursue?

OpenStack: how to properly activate VPNaaS logging?

We have an OpenStack cluster built with OpenStack-Ansible, and we are very happy with it. I am currently trying to set up a VPN. We have activated everything necessary and tested it successfully between our OpenStack and a SonicWall. We are now trying with a customer, but unfortunately the connection doesn't come up, and I am looking for logs, but it seems nothing is logged.
We are on OpenStack Ussuri and Ubuntu 20.04.
We have activated strongSwan.
Below are some of the config files:
Controller-node:
/etc/neutron/neutron.conf
[DEFAULT]
# Disable stderr logging
use_stderr = false
debug = true
publish_errors = true
fatal_deprecations = False
use_journal = True
## Rpc all
executor_thread_pool_size = 64
rpc_response_timeout = 60
transport_url = hide
# Domain to use for building hostnames
dns_domain = openstacklocal
# Agent
[agent]
polling_interval = 5
report_interval = 60
root_helper = sudo /openstack/venvs/neutron-21.0.0/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
root_helper_daemon = sudo /openstack/venvs/neutron-21.0.0/bin/neutron-rootwrap-daemon /etc/neutron/rootwrap.conf
# Messaging
[oslo_messaging_rabbit]
ssl = True
rpc_conn_pool_size = 30
# Notifications
[oslo_messaging_notifications]
topics = notifications
driver = messagingv2
transport_url = hide
# Concurrency (locking mechanisms)
[oslo_concurrency]
lock_path = /var/lock/neutron
/etc/neutron/l3_agent.ini:
[DEFAULT]
debug = True
# Drivers
interface_driver = linuxbridge
agent_mode = legacy
# Conventional failover
allow_automatic_l3agent_failover = True
# HA failover
ha_confs_path = /var/lib/neutron/ha_confs
ha_vrrp_advert_int = 2
ha_vrrp_auth_password = hide
ha_vrrp_auth_type = PASS
# Metadata
enable_metadata_proxy = True
# L3 plugins
# VPNaaS
[vpnagent]
vpn_device_driver = neutron_vpnaas.services.vpn.device_drivers.strongswan_ipsec.StrongSwanDriver
[AGENT]
extensions = vpnaas
/etc/neutron/neutron_vpnaas.conf:
[service_providers]
service_provider = VPN:strongswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
/etc/neutron/rootwrap.conf:
[DEFAULT]
# List of directories to load filter definitions from (separated by ',').
# These directories MUST all be only writeable by root !
# List of directories to search executables in, in case filters do not
# explicitely specify a full path (separated by ',')
# If not specified, defaults to system PATH environment variable.
# These directories MUST all be only writeable by root !
# Enable logging to syslog
# Default value is False
use_syslog = False
# Which syslog facility to use.
# Valid values include auth, authpriv, syslog, local0, local1...
# Default value is 'syslog'
syslog_log_facility = syslog
# Which messages to log.
# INFO means log all usage
# ERROR means only log unsuccessful attempts
syslog_log_level = ERROR
# Rootwrap daemon exits after this seconds of inactivity
daemon_timeout = 600
filters_path = /etc/neutron/rootwrap.d,/usr/share/neutron/rootwrap
exec_dirs = /openstack/venvs/neutron-21.0.0/bin,/sbin,/usr/sbin,/bin,/usr/bin,/usr/local/bin,/usr/local/sbin
[xenapi]
# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
xenapi_connection_url = <None>
xenapi_connection_username = root
xenapi_connection_password = <None>
/openstack/venvs/neutron-21.0.0/lib/python3.8/site-packages/neutron_vpnaas/services/vpn/device_drivers/template/strongswan/ipsec.conf.template:
# Configuration for {{vpnservice.id}}
config setup
charondebug="ike 4, knl 4,net 4,enc 4,chd 4,esp 4,cfg 2,dmn 4,mgr 4,asn 4"
conn %default
keylife=20m
rekeymargin=3m
keyingtries=1
authby=psk
mobike=no
{% for ipsec_site_connection in vpnservice.ipsec_site_connections%}
conn {{ipsec_site_connection.id}}
keyexchange={{ipsec_site_connection.ikepolicy.ike_version}}
left={{ipsec_site_connection.external_ip}}
leftsubnet={{ipsec_site_connection['local_cidrs']|join(',')}}
leftid={{ipsec_site_connection.local_id}}
leftfirewall=yes
right={{ipsec_site_connection.peer_address}}
rightsubnet={{ipsec_site_connection['peer_cidrs']|join(',')}}
rightid={{ipsec_site_connection.peer_id}}
auto=route
dpdaction={{ipsec_site_connection.dpd_action}}
dpddelay={{ipsec_site_connection.dpd_interval}}s
dpdtimeout={{ipsec_site_connection.dpd_timeout}}s
ike={{ipsec_site_connection.ikepolicy.encryption_algorithm}}-{{ipsec_site_connection.ikepolicy.auth_algorithm}}-{{ipsec_site_connection.ikepolicy.pfs}}
ikelifetime={{ipsec_site_connection.ikepolicy.lifetime_value}}s
{%- if ipsec_site_connection.ipsecpolicy.transform_protocol == "ah" %}
ah={{ipsec_site_connection.ipsecpolicy.auth_algorithm}}-{{ipsec_site_connection.ipsecpolicy.pfs}}
{%- else %}
esp={{ipsec_site_connection.ipsecpolicy.encryption_algorithm}}-{{ipsec_site_connection.ipsecpolicy.auth_algorithm}}-{{ipsec_site_connection.ipsecpolicy.pfs}}
{%- endif %}
lifetime={{ipsec_site_connection.ipsecpolicy.lifetime_value}}s
type={{ipsec_site_connection.ipsecpolicy.encapsulation_mode}}
{% endfor %}
If someone can help me activate the logging, that would be great.
Thanks
I've finally managed to activate the logging. Below is my config, in addition to the first post:
On controllers:
/etc/strongswan.d/charon-logging.conf
charon {
# Section to define file loggers, see LOGGER CONFIGURATION in
# strongswan.conf(5).
filelog {
# <name> may be the full path to the log file if it only contains
# characters permitted in section names. Is ignored if path is
# specified.
charon {
# Loglevel for a specific subsystem.
# <subsystem> = <default>
# If this option is enabled log entries are appended to the existing
# file.
append = no
# Default loglevel.
default = 3
# Enabling this option disables block buffering and enables line
# buffering.
flush_line = yes
# Prefix each log entry with the connection name and a unique
# numerical identifier for each IKE_SA.
ike_name = yes
# Optional path to the log file. Overrides the section name. Must be
# used if the path contains characters that aren't allowed in
# section names.
path = /var/log/charon-ike.log
# Adds the milliseconds within the current second after the
# timestamp (separated by a dot, so time_format should end with %S
# or %T).
time_add_ms = yes
# Prefix each log entry with a timestamp. The option accepts a
# format string as passed to strftime(3).
time_format = %b %e %T
}
}
# Section to define syslog loggers, see LOGGER CONFIGURATION in
# strongswan.conf(5).
syslog {
# Identifier for use with openlog(3).
# identifier = CHARON
# <facility> is one of the supported syslog facilities, see LOGGER
# CONFIGURATION in strongswan.conf(5).
# auth {
# Loglevel for a specific subsystem.
# <subsystem> = <default>
# Default loglevel.
# default = 2
# Prefix each log entry with the connection name and a unique
# numerical identifier for each IKE_SA.
# ike_name = yes
# }
}
}
And most important:
sudo apparmor_parser -R /etc/apparmor.d/usr.lib.ipsec.charon
With this config you will have a log file under /var/log/charon-ike.log.
You may also need to restart neutron.
Thanks for the help ;)
This is not an answer, but it makes it more readable. We didn't touch the charon configs, but this is the /etc/strongswan.d/charon-logging.conf:
charon {
# Section to define file loggers, see LOGGER CONFIGURATION in
# strongswan.conf(5).
filelog {
# <filename> is the full path to the log file.
# <filename> {
# Loglevel for a specific subsystem.
# <subsystem> = <default>
# If this option is enabled log entries are appended to the existing
# file.
# append = yes
# Default loglevel.
# default = 1
# Enabling this option disables block buffering and enables line
# buffering.
# flush_line = no
# Prefix each log entry with the connection name and a unique
# numerical identifier for each IKE_SA.
# ike_name = no
# Prefix each log entry with a timestamp. The option accepts a
# format string as passed to strftime(3).
# time_format =
# }
}
# Section to define syslog loggers, see LOGGER CONFIGURATION in
# strongswan.conf(5).
syslog {
# Identifier for use with openlog(3).
# identifier =
# <facility> is one of the supported syslog facilities, see LOGGER
# CONFIGURATION in strongswan.conf(5).
# <facility> {
# Loglevel for a specific subsystem.
# <subsystem> = <default>
# Default loglevel.
# default = 1
# Prefix each log entry with the connection name and a unique
# numerical identifier for each IKE_SA.
# ike_name = no
# }
}
}

AWS Amplify `No plugin found in Storage for the provider` error

My AWS Amplify application uses Storage. When I run the application, I get a No plugin found in Storage for the provider error. Some of the things that I have tried are:
a. removing and adding the storage module with the `amplify add/remove storage` commands.
b. manually configuring storage in main.js based on this GitHub issue.
c. deleting the application's node_modules and adding them again.
What could I be missing?
main.js:
import Amplify, {Auth, Storage} from 'aws-amplify';
import '@aws-amplify/ui-vue';
import aws_exports from './aws-exports';
Amplify.configure(aws_exports);
main.js: Added code to manually configure storage as per this github issue:
Amplify.configure({
Auth: {
identityPoolId: '<IdentityPoolId>',
region: "<region>",
userPoolId: "<userPoolId>",
userPoolWebClientId: "<userPoolWebClientId>",
},
Storage: {
AWSS3: {
bucket: "<bucket>",
region: "<region>",
}
}
});
StackTrace:
Uncaught (in promise) No plugin found in Storage for the provider
(anonymous) @ 4.js:241
step @ 4.js:69
(anonymous) @ 4.js:50
(anonymous) @ 4.js:44
push../node_modules/@aws-amplify/storage/lib-esm/Storage.js.__awaiter @ 4.js:40
push../node_modules/@aws-amplify/storage/lib-esm/Storage.js.Storage.put @ 4.js:234
_loop$ @ 18.js:286
tryCatch @ vendors~app.js:455791
invoke @ vendors~app.js:456017
prototype.<computed> @ vendors~app.js:455843
tryCatch @ vendors~app.js:455791
maybeInvokeDelegate @ vendors~app.js:456080
invoke @ vendors~app.js:455991
prototype.<computed> @ vendors~app.js:455843
asyncGeneratorStep @ vendors~app.js:162870
_next @ vendors~app.js:162892
(anonymous) @ vendors~app.js:162899
F @ vendors~app.js:319219
(anonymous) @ vendors~app.js:162888
TranscribeFiles @ 18.js:392
invokeWithErrorHandling @ vendors~app.js:433345
invoker @ vendors~app.js:433670
invokeWithErrorHandling @ vendors~app.js:433345
Vue.$emit @ vendors~app.js:435374
clickButton @ vendors~app.js:444622
click @ vendors~app.js:444498
invokeWithErrorHandling @ vendors~app.js:433345
invoker @ vendors~app.js:433670
original._wrapper @ vendors~app.js:438399
I was able to get the application working by:
a. removing the manual configuration in main.js,
b. removing storage completely,
c. publishing the application to AWS,
d. adding storage back again and publishing the application to AWS.

Removing Airflow task logs

I'm running 5 DAGs which have generated a total of about 6 GB of log data in the base_log_folder over a month's period. I just added a remote_base_log_folder, but it seems it does not exclude logging to the base_log_folder.
Is there any way to automatically remove old log files, rotate them, or force Airflow to not log on disk (base_log_folder) but only to remote storage?
Please refer to https://github.com/teamclairvoyant/airflow-maintenance-dags
This project has DAGs that can kill halted tasks and perform log cleanups.
You can grab the concepts and come up with a new DAG that cleans up as per your requirements; a minimal sketch is shown below.
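For illustration, a minimal sketch of such a cleanup DAG could simply delete task log files older than a retention period. The DAG id, log path, and retention value below are assumptions for the example (not taken from the linked project), so adjust them to your environment:
"""Hypothetical log-cleanup DAG sketch; paths and retention are assumptions."""
from datetime import timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.utils.dates import days_ago

BASE_LOG_FOLDER = "/usr/local/airflow/logs"  # adjust to your base_log_folder
MAX_LOG_AGE_IN_DAYS = 30                     # assumed retention period

with DAG(
    dag_id="airflow_log_cleanup_example",
    default_args={"owner": "airflow", "retries": 1, "retry_delay": timedelta(minutes=5)},
    schedule_interval="@daily",
    start_date=days_ago(1),
    catchup=False,
) as dag:
    # Delete log files older than the retention period, then prune empty directories.
    cleanup = BashOperator(
        task_id="delete_old_task_logs",
        bash_command=(
            f"find {BASE_LOG_FOLDER} -type f -name '*.log' "
            f"-mtime +{MAX_LOG_AGE_IN_DAYS} -delete && "
            f"find {BASE_LOG_FOLDER} -type d -empty -delete"
        ),
    )
The same idea works with a PythonOperator walking the log tree, if you prefer to keep the logic in Python.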
We remove the task logs by implementing our own FileTaskHandler and then pointing to it in the airflow.cfg. So we override the default log handler to keep only N task logs, without scheduling additional DAGs.
We are using Airflow==1.10.1.
[core]
logging_config_class = log_config.LOGGING_CONFIG
log_config.LOGGING_CONFIG
BASE_LOG_FOLDER = conf.get('core', 'BASE_LOG_FOLDER')
FOLDER_TASK_TEMPLATE = '{{ ti.dag_id }}/{{ ti.task_id }}'
FILENAME_TEMPLATE = '{{ ti.dag_id }}/{{ ti.task_id }}/{{ ts }}/{{ try_number }}.log'

LOGGING_CONFIG = {
    'formatters': {},
    'handlers': {
        '...': {},
        'task': {
            'class': 'file_task_handler.FileTaskRotationHandler',
            'formatter': 'airflow.job',
            'base_log_folder': os.path.expanduser(BASE_LOG_FOLDER),
            'filename_template': FILENAME_TEMPLATE,
            'folder_task_template': FOLDER_TASK_TEMPLATE,
            'retention': 20
        },
        '...': {}
    },
    'loggers': {
        'airflow.task': {
            'handlers': ['task'],
            'level': JOB_LOG_LEVEL,
            'propagate': False,
        },
        'airflow.task_runner': {
            'handlers': ['task'],
            'level': LOG_LEVEL,
            'propagate': True,
        },
        '...': {}
    }
}
file_task_handler.FileTaskRotationHandler
import os
import shutil

from airflow.utils.helpers import parse_template_string
from airflow.utils.log.file_task_handler import FileTaskHandler


class FileTaskRotationHandler(FileTaskHandler):

    def __init__(self, base_log_folder, filename_template, folder_task_template, retention):
        """
        :param base_log_folder: Base log folder to place logs.
        :param filename_template: template filename string.
        :param folder_task_template: template folder task path.
        :param retention: Number of folder logs to keep
        """
        super(FileTaskRotationHandler, self).__init__(base_log_folder, filename_template)
        self.retention = retention
        self.folder_task_template, self.folder_task_template_jinja_template = \
            parse_template_string(folder_task_template)

    @staticmethod
    def _get_directories(path='.'):
        return next(os.walk(path))[1]

    def _render_folder_task_path(self, ti):
        if self.folder_task_template_jinja_template:
            jinja_context = ti.get_template_context()
            return self.folder_task_template_jinja_template.render(**jinja_context)

        return self.folder_task_template.format(dag_id=ti.dag_id, task_id=ti.task_id)

    def _init_file(self, ti):
        relative_path = self._render_folder_task_path(ti)
        folder_task_path = os.path.join(self.local_base, relative_path)
        subfolders = self._get_directories(folder_task_path)
        to_remove = set(subfolders) - set(subfolders[-self.retention:])

        for dir_to_remove in to_remove:
            full_dir_to_remove = os.path.join(folder_task_path, dir_to_remove)
            print('Removing', full_dir_to_remove)
            shutil.rmtree(full_dir_to_remove)

        return FileTaskHandler._init_file(self, ti)
Airflow maintainers don't think truncating logs is a part of Airflow's core logic (see this), and in this issue the maintainers suggest changing LOG_LEVEL to avoid too much log data.
And in this PR, we can learn how to change the log level in airflow.cfg.
Good luck.
I know it sounds savage, but have you tried pointing base_log_folder to /dev/null? I use Airflow as part of a container, so I don't care about the files either, as long as the logger pipes to STDOUT as well.
Not sure how well this plays with S3, though.
For your concrete problems, I have some suggestions.
For those, you would always need a specialized logging config as described in this answer: https://stackoverflow.com/a/54195537/2668430
automatically remove old log files and rotate them
I don't have any practical experience with the TimedRotatingFileHandler from the Python standard library yet, but you might give it a try:
https://docs.python.org/3/library/logging.handlers.html#timedrotatingfilehandler
It not only offers to rotate your files based on a time interval, but if you specify the backupCount parameter, it even deletes your old log files:
If backupCount is nonzero, at most backupCount files will be kept, and if more would be created when rollover occurs, the oldest one is deleted. The deletion logic uses the interval to determine which files to delete, so changing the interval may leave old files lying around.
Which sounds pretty much like the best solution for your first problem.
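As a small, self-contained sketch of that behaviour (the file path and logger name here are invented for the demo and are not Airflow settings; wiring this into Airflow still requires a custom logging config as mentioned above):
import logging
from logging.handlers import TimedRotatingFileHandler

# Hypothetical standalone demo of time-based rotation with automatic deletion.
handler = TimedRotatingFileHandler(
    filename="/tmp/task_demo.log",  # example path, not an Airflow default
    when="midnight",                # rotate once per day at midnight
    interval=1,
    backupCount=7,                  # keep at most 7 rotated files; older ones are deleted
    encoding="utf-8",
)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))

logger = logging.getLogger("rotation_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

logger.info("this line goes to /tmp/task_demo.log and is rotated daily")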
force airflow to not log on disk (base_log_folder), but only in remote storage?
In this case you should specify the logging config in such a way that you do not have any logging handlers that write to a file, i.e. remove all FileHandlers.
Rather, try to find logging handlers that send the output directly to a remote address.
E.g. CMRESHandler which logs directly to ElasticSearch but needs some extra fields in the log calls.
Alternatively, write your own handler class and let it inherit from the Python standard library's HTTPHandler.
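For example, a minimal sketch of such a custom handler built on HTTPHandler might look like the following; the host, URL, and extra field are placeholders for illustration, not a real logging service:
import logging
from logging.handlers import HTTPHandler


class RemoteTaskLogHandler(HTTPHandler):
    """Hypothetical handler that ships log records to a remote HTTP endpoint."""

    def mapLogRecord(self, record):
        # HTTPHandler URL-encodes the dict returned here and sends it as the
        # GET query string or POST body.
        data = dict(super().mapLogRecord(record))
        data["service"] = "airflow-task"  # example extra field, adjust as needed
        return data


handler = RemoteTaskLogHandler(
    host="logs.example.com:8080",  # placeholder collector address
    url="/ingest",                 # placeholder endpoint
    method="POST",
)

logger = logging.getLogger("remote_demo")
logger.setLevel(logging.INFO)
logger.addHandler(handler)
logger.info("this record is POSTed to the remote endpoint instead of a file")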
A final suggestion would be to combine both the TimedRotatingFileHandler and setup ElasticSearch together with FileBeat, so you would be able to store your logs inside ElasticSearch (i.e. remote), but you wouldn't store a huge amount of logs on your Airflow disk since they will be removed by the backupCount retention policy of your TimedRotatingFileHandler.
Usually Apache Airflow grabs disk space due to three reasons:
1. Airflow scheduler log files
2. MySQL binary logs [major]
3. XCom table records
To clean these up on a regular basis, I have set up a DAG which runs daily, purges the binary logs, and truncates the XCom table to free disk space.
You might also need to install mysql-connector-python (pip install mysql-connector-python).
To clean up scheduler log files, I delete them manually twice a week, to avoid the risk of deleting logs that are still needed for some reason. I clean the log files with the sudo rm -rd airflow/logs/ command.
Below is my Python code for reference:
"""Example DAG demonstrating the usage of the PythonOperator."""
from airflow import DAG
from airflow.operators.python import PythonOperator
from datetime import datetime, timedelta
from airflow.utils.dates import days_ago
from airflow.operators.bash import BashOperator
from airflow.providers.postgres.operators.postgres import PostgresOperator
args = {
'owner': 'airflow',
'email_on_failure':True,
'retries': 1,
'email':['Your Email Id'],
'retry_delay': timedelta(minutes=5)
}
dag = DAG(
dag_id='airflow_logs_cleanup',
default_args=args,
schedule_interval='#daily',
start_date=days_ago(0),
catchup=False,
max_active_runs=1,
tags=['airflow_maintenance'],
)
def truncate_table():
import mysql.connector
connection = mysql.connector.connect(host='localhost',
database='db_name',
user='username',
password='your password',
auth_plugin='mysql_native_password')
cursor = connection.cursor()
sql_select_query = """TRUNCATE TABLE xcom"""
cursor.execute(sql_select_query)
connection.commit()
connection.close()
print("XCOM Table truncated successfully")
def delete_binary_logs():
import mysql.connector
from datetime import datetime
date = datetime.today().strftime('%Y-%m-%d')
connection = mysql.connector.connect(host='localhost',
database='db_name',
user='username',
password='your_password',
auth_plugin='mysql_native_password')
cursor = connection.cursor()
query = 'PURGE BINARY LOGS BEFORE ' + "'" + str(date) + "'"
sql_select_query = query
cursor.execute(sql_select_query)
connection.commit()
connection.close()
print("Binary logs deleted successfully")
t1 = PythonOperator(
task_id='truncate_table',
python_callable=truncate_table, dag=dag
)
t2 = PythonOperator(
task_id='delete_binary_logs',
python_callable=delete_binary_logs, dag=dag
)
t2 << t1
'
I am surprised, but it worked for me. Update your config as below:
base_log_folder=""
It is tested with MinIO and with S3.
Our solution looks a lot like Franzi's:
Running on Airflow 2.0.1 (py3.8)
Override default logging configuration
Since we use a Helm chart for the Airflow deployment, it was easiest to push an env var there, but it can also be done in airflow.cfg or using ENV in the Dockerfile.
# Set custom logging configuration to enable log rotation for task logging
AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS: "airflow_plugins.settings.airflow_local_settings.DEFAULT_LOGGING_CONFIG"
Then we added the logging configuration, together with the custom log handler, to a Python module we build and install in the Docker image, as described here: https://airflow.apache.org/docs/apache-airflow/stable/modules_management.html
Logging configuration snippet
This is just a copy of the default from the Airflow codebase, but the task logger gets a different handler.
DEFAULT_LOGGING_CONFIG: Dict[str, Any] = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'airflow': {'format': LOG_FORMAT},
        'airflow_coloured': {
            'format': COLORED_LOG_FORMAT if COLORED_LOG else LOG_FORMAT,
            'class': COLORED_FORMATTER_CLASS if COLORED_LOG else 'logging.Formatter',
        },
    },
    'handlers': {
        'console': {
            'class': 'airflow.utils.log.logging_mixin.RedirectStdHandler',
            'formatter': 'airflow_coloured',
            'stream': 'sys.stdout',
        },
        'task': {
            'class': 'airflow_plugins.log.rotating_file_task_handler.RotatingFileTaskHandler',
            'formatter': 'airflow',
            'base_log_folder': os.path.expanduser(BASE_LOG_FOLDER),
            'filename_template': FILENAME_TEMPLATE,
            'maxBytes': 10485760,  # 10MB
            'backupCount': 6,
        },
        ...
RotatingFileTaskHandler
And finally the custom handler which is just a merge of the logging.handlers.RotatingFileHandler and the FileTaskHandler.
#
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
"""File logging handler for tasks."""
import logging
import os
from pathlib import Path
from typing import TYPE_CHECKING, Optional
import requests
from airflow.configuration import AirflowConfigException, conf
from airflow.utils.helpers import parse_template_string
if TYPE_CHECKING:
from airflow.models import TaskInstance
class RotatingFileTaskHandler(logging.Handler):
"""
FileTaskHandler is a python log handler that handles and reads
task instance logs. It creates and delegates log handling
to `logging.FileHandler` after receiving task instance context.
It reads logs from task instance's host machine.
:param base_log_folder: Base log folder to place logs.
:param filename_template: template filename string
"""
def __init__(self, base_log_folder: str, filename_template: str, maxBytes=0, backupCount=0):
self.max_bytes = maxBytes
self.backup_count = backupCount
super().__init__()
self.handler = None # type: Optional[logging.FileHandler]
self.local_base = base_log_folder
self.filename_template, self.filename_jinja_template = parse_template_string(filename_template)
def set_context(self, ti: "TaskInstance"):
"""
Provide task_instance context to airflow task handler.
:param ti: task instance object
"""
local_loc = self._init_file(ti)
self.handler = logging.handlers.RotatingFileHandler(
filename=local_loc,
mode='a',
maxBytes=self.max_bytes,
backupCount=self.backup_count,
encoding='utf-8',
delay=False,
)
if self.formatter:
self.handler.setFormatter(self.formatter)
self.handler.setLevel(self.level)
def emit(self, record):
if self.handler:
self.handler.emit(record)
def flush(self):
if self.handler:
self.handler.flush()
def close(self):
if self.handler:
self.handler.close()
def _render_filename(self, ti, try_number):
if self.filename_jinja_template:
if hasattr(ti, 'task'):
jinja_context = ti.get_template_context()
jinja_context['try_number'] = try_number
else:
jinja_context = {
'ti': ti,
'ts': ti.execution_date.isoformat(),
'try_number': try_number,
}
return self.filename_jinja_template.render(**jinja_context)
return self.filename_template.format(
dag_id=ti.dag_id,
task_id=ti.task_id,
execution_date=ti.execution_date.isoformat(),
try_number=try_number,
)
def _read_grouped_logs(self):
return False
def _read(self, ti, try_number, metadata=None): # pylint: disable=unused-argument
"""
Template method that contains custom logic of reading
logs given the try_number.
:param ti: task instance record
:param try_number: current try_number to read log from
:param metadata: log metadata,
can be used for steaming log reading and auto-tailing.
:return: log message as a string and metadata.
"""
# Task instance here might be different from task instance when
# initializing the handler. Thus explicitly getting log location
# is needed to get correct log path.
log_relative_path = self._render_filename(ti, try_number)
location = os.path.join(self.local_base, log_relative_path)
log = ""
if os.path.exists(location):
try:
with open(location) as file:
log += f"*** Reading local file: {location}\n"
log += "".join(file.readlines())
except Exception as e: # pylint: disable=broad-except
log = f"*** Failed to load local log file: {location}\n"
log += "*** {}\n".format(str(e))
elif conf.get('core', 'executor') == 'KubernetesExecutor': # pylint: disable=too-many-nested-blocks
try:
from airflow.kubernetes.kube_client import get_kube_client
kube_client = get_kube_client()
if len(ti.hostname) >= 63:
# Kubernetes takes the pod name and truncates it for the hostname. This truncated hostname
# is returned for the fqdn to comply with the 63 character limit imposed by DNS standards
# on any label of a FQDN.
pod_list = kube_client.list_namespaced_pod(conf.get('kubernetes', 'namespace'))
matches = [
pod.metadata.name
for pod in pod_list.items
if pod.metadata.name.startswith(ti.hostname)
]
if len(matches) == 1:
if len(matches[0]) > len(ti.hostname):
ti.hostname = matches[0]
log += '*** Trying to get logs (last 100 lines) from worker pod {} ***\n\n'.format(
ti.hostname
)
res = kube_client.read_namespaced_pod_log(
name=ti.hostname,
namespace=conf.get('kubernetes', 'namespace'),
container='base',
follow=False,
tail_lines=100,
_preload_content=False,
)
for line in res:
log += line.decode()
except Exception as f: # pylint: disable=broad-except
log += '*** Unable to fetch logs from worker pod {} ***\n{}\n\n'.format(ti.hostname, str(f))
else:
url = os.path.join("http://{ti.hostname}:{worker_log_server_port}/log", log_relative_path).format(
ti=ti, worker_log_server_port=conf.get('celery', 'WORKER_LOG_SERVER_PORT')
)
log += f"*** Log file does not exist: {location}\n"
log += f"*** Fetching from: {url}\n"
try:
timeout = None # No timeout
try:
timeout = conf.getint('webserver', 'log_fetch_timeout_sec')
except (AirflowConfigException, ValueError):
pass
response = requests.get(url, timeout=timeout)
response.encoding = "utf-8"
# Check if the resource was properly fetched
response.raise_for_status()
log += '\n' + response.text
except Exception as e: # pylint: disable=broad-except
log += "*** Failed to fetch log file from worker. {}\n".format(str(e))
return log, {'end_of_log': True}
def read(self, task_instance, try_number=None, metadata=None):
"""
Read logs of given task instance from local machine.
:param task_instance: task instance object
:param try_number: task instance try_number to read logs from. If None
it returns all logs separated by try_number
:param metadata: log metadata,
can be used for steaming log reading and auto-tailing.
:return: a list of listed tuples which order log string by host
"""
# Task instance increments its try number when it starts to run.
# So the log for a particular task try will only show up when
# try number gets incremented in DB, i.e logs produced the time
# after cli run and before try_number + 1 in DB will not be displayed.
if try_number is None:
next_try = task_instance.next_try_number
try_numbers = list(range(1, next_try))
elif try_number < 1:
logs = [
[('default_host', f'Error fetching the logs. Try number {try_number} is invalid.')],
]
return logs, [{'end_of_log': True}]
else:
try_numbers = [try_number]
logs = [''] * len(try_numbers)
metadata_array = [{}] * len(try_numbers)
for i, try_number_element in enumerate(try_numbers):
log, metadata = self._read(task_instance, try_number_element, metadata)
# es_task_handler return logs grouped by host. wrap other handler returning log string
# with default/ empty host so that UI can render the response in the same way
logs[i] = log if self._read_grouped_logs() else [(task_instance.hostname, log)]
metadata_array[i] = metadata
return logs, metadata_array
def _init_file(self, ti):
"""
Create log directory and give it correct permissions.
:param ti: task instance object
:return: relative log path of the given task instance
"""
# To handle log writing when tasks are impersonated, the log files need to
# be writable by the user that runs the Airflow command and the user
# that is impersonated. This is mainly to handle corner cases with the
# SubDagOperator. When the SubDagOperator is run, all of the operators
# run under the impersonated user and create appropriate log files
# as the impersonated user. However, if the user manually runs tasks
# of the SubDagOperator through the UI, then the log files are created
# by the user that runs the Airflow command. For example, the Airflow
# run command may be run by the `airflow_sudoable` user, but the Airflow
# tasks may be run by the `airflow` user. If the log files are not
# writable by both users, then it's possible that re-running a task
# via the UI (or vice versa) results in a permission error as the task
# tries to write to a log file created by the other user.
relative_path = self._render_filename(ti, ti.try_number)
full_path = os.path.join(self.local_base, relative_path)
directory = os.path.dirname(full_path)
# Create the log file and give it group writable permissions
# TODO(aoen): Make log dirs and logs globally readable for now since the SubDag
# operator is not compatible with impersonation (e.g. if a Celery executor is used
# for a SubDag operator and the SubDag operator has a different owner than the
# parent DAG)
Path(directory).mkdir(mode=0o777, parents=True, exist_ok=True)
if not os.path.exists(full_path):
open(full_path, "a").close()
# TODO: Investigate using 444 instead of 666.
os.chmod(full_path, 0o666)
return full_path
Maybe a final note: the links in the Airflow UI to the logging will now only open the latest log file, not the older rotated files, which are only accessible by means of SSH or any other interface to the Airflow logging path.
I don't think there is a rotation mechanism, but you can store them in S3 or Google Cloud Storage as described here: https://airflow.incubator.apache.org/configuration.html#logs
