What do the numbers at the beginning of printk() output mean? - printk

If you use printk() to print kernel messages and read them in the console, the output looks like this:
<6>[ 2809.666228] amp_enable: amp enable bypass(2)
<6>[ 2809.666747] amp_enable: AMP_EN is set to 0
<3>[ 2810.084296] init: untracked pid 4196 exited
<3>[ 2810.873706] init: untracked pid 4817 exited
<6>[ 2810.933923] msm_ta_detect_work: USB exit ta detection - frindex
<6>[ 2817.483839] amp_enable: AMP_EN is set to 1
<6>[ 2823.084022] adjust_soc: ibat_ua = -114500, vbat_uv = 4296066, soc = 95, batt_temp=302
<6>[ 2823.669799] SLIM_CL: skip reconfig sequence
<6>[ 2823.685578] amp_enable: amp enable bypass(2)
<6>[ 2823.686372] amp_enable: AMP_EN is set to 0
What do the numbers at the beginning of each line mean? Are they some kind of timestamp? How do I interpret them?

As NG mentioned, the <3> and <6> are log levels where <3> is KERN_ERR and <6> is KERN_INFO.
Here's a list picked up from http://tuxthink.blogspot.com/2012/07/printk-and-console-log-level.html.
0 KERN_EMERG
1 KERN_ALERT
2 KERN_CRIT
3 KERN_ERR
4 KERN_WARNING
5 KERN_NOTICE
6 KERN_INFO
7 KERN_DEBUG
The next number is the time in seconds (with microsecond resolution) since the system booted. Did your system boot roughly 47 minutes before you captured these messages? The timestamps help you measure how long something took: for example, the gap between the first two entries below is 0.519 ms (2809.666747 − 2809.666228), and from the first entry to the fourth entry it is 1.207478 s.
<6>[ 2809.666228] amp_enable: amp enable bypass(2)
<6>[ 2809.666747] amp_enable: AMP_EN is set to 0
<3>[ 2810.084296] init: untracked pid 4196 exited
<3>[ 2810.873706] init: untracked pid 4817 exited
I learned this by visiting http://elinux.org/Printk_Times_Sample1.
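If it helps, here is a small sketch in plain Python (not a kernel tool; "kmsg.txt" is a hypothetical capture of lines like the ones above) that pulls out the level name and the time gap between consecutive messages:
import re

# Map printk level numbers (0-7) to their names (KERN_EMERG .. KERN_DEBUG).
LEVELS = ["KERN_EMERG", "KERN_ALERT", "KERN_CRIT", "KERN_ERR",
          "KERN_WARNING", "KERN_NOTICE", "KERN_INFO", "KERN_DEBUG"]

LINE = re.compile(r"<(\d)>\[\s*(\d+\.\d+)\]\s*(.*)")

def parse(lines):
    prev = None
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        level, ts, msg = int(m.group(1)), float(m.group(2)), m.group(3)
        # Delta is the time since the previous matched message, in seconds.
        delta = 0.0 if prev is None else ts - prev
        prev = ts
        print(f"{LEVELS[level]:<12} +{delta:9.6f}s  {msg}")

parse(open("kmsg.txt"))  # hypothetical file containing lines like the ones above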

Related

Airflow executes my Python DAG script once a second (one by one), ignoring min_file_process_interval (=30 sec)

Hi, all Airflow specialists.
I have only one DAG script, which is used for my platform's purposes.
Only my dev Airflow shows this strange behaviour (on QA and prod everything is fine):
when the scheduler starts to execute my DAG, it reads and executes my Python script every second instead of every 30 seconds.
After the DAG run (and for a couple of minutes afterwards) the scheduler returns to its usual behaviour and reads my script every 30 seconds (as min_file_process_interval tells it to).
This happens only in one installation (DEV); in the others everything is fine (Airflow reads and executes the same script once per 30 seconds).
I created a crutch that checks how much time has passed since the previous parse of the script:
import sys
from datetime import timedelta

import pendulum
from dateutil import parser

from airflow.configuration import conf
from airflow.models import Variable

START_DATE = CURRENT_DATE = pendulum.now('UTC')
# conf.get() returns a string, so convert it before doing timedelta math
min_file_process_interval = int(conf.get("core", "min_file_process_interval"))

ths_scrpt_last_run_at = Variable.get('CMOCEAN_BATCHFLOW_ORCHESTRATOR_LAST_RUN_AT', default_var=None)
if ths_scrpt_last_run_at:
    ths_scrpt_last_run_at = parser.parse(ths_scrpt_last_run_at)
else:
    # First start: the variable does not exist yet, so create it and pretend
    # the last run was long enough ago for this run to proceed
    Variable.set('CMOCEAN_BATCHFLOW_ORCHESTRATOR_LAST_RUN_AT', str(START_DATE))
    ths_scrpt_last_run_at = CURRENT_DATE - 2 * timedelta(seconds=min_file_process_interval)

if ths_scrpt_last_run_at > CURRENT_DATE - timedelta(seconds=min_file_process_interval):
    # Started earlier than min_file_process_interval allows: wrong behaviour
    print('Period ' + str(min_file_process_interval) + ' seconds has not finished yet. Stop the script execution')
    sys.exit("Period " + str(min_file_process_interval) + " sec is not finished, stop the script execution")
else:
    # Enough time has passed; remember this start time for the next executions
    Variable.delete('CMOCEAN_BATCHFLOW_ORCHESTRATOR_LAST_RUN_AT')
    Variable.set('CMOCEAN_BATCHFLOW_ORCHESTRATOR_LAST_RUN_AT', str(START_DATE))
So, when the problem starts, I see the message "Period 30 seconds has not finished yet. Stop the script execution" in my logs:
[2022-11-16 08:44:45,076] {processor.py:153} INFO - Started process (PID=52935) to work on /opt/bitnami/airflow/dags/some_dir/orchestrator.py
[2022-11-16 08:44:45,077] {processor.py:641} INFO - Processing file /opt/bitnami/airflow/dags/some_dir/orchestrator.py for tasks to queue
[2022-11-16 08:44:45,077] {logging_mixin.py:115} INFO - [2022-11-16 08:44:45,077] {dagbag.py:507} INFO - Filling up the DagBag from /opt/bitnami/airflow/dags/some_dir/orchestrator.py
[2022-11-16 08:44:45,092] {logging_mixin.py:115} INFO - ========================================== NEW START ==========================================
[2022-11-16 08:44:45,092] {logging_mixin.py:115} INFO - Scheduler works
[2022-11-16 08:44:45,120] {logging_mixin.py:115} INFO - Period 30 seconds has not finished yet. Stop the script execution
[2022-11-16 08:44:46,130] {processor.py:153} INFO - Started process (PID=52936) to work on /opt/bitnami/airflow/dags/some_dir/orchestrator.py
[2022-11-16 08:44:46,131] {processor.py:641} INFO - Processing file /opt/bitnami/airflow/dags/some_dir/orchestrator.py for tasks to queue
[2022-11-16 08:44:46,132] {logging_mixin.py:115} INFO - [2022-11-16 08:44:46,132] {dagbag.py:507} INFO - Filling up the DagBag from /opt/bitnami/airflow/dags/some_dir/orchestrator.py
[2022-11-16 08:44:46,147] {logging_mixin.py:115} INFO - ========================================== NEW START ==========================================
[2022-11-16 08:44:46,147] {logging_mixin.py:115} INFO - Scheduler works
[2022-11-16 08:44:46,170] {logging_mixin.py:115} INFO - Period 30 seconds has not finished yet. Stop the script execution
As you can see, it is not some loop inside my script; the scheduler itself repeats the parsing of my script (the PIDs differ, just as in the normal 30-second mode; in the example, PID=52935 and PID=52936).
As you can see, the dagbag_import_timeout parameter is read from the config successfully (I use it in my script and print it to the log), but the scheduler just ignores it while the issue is happening.
Airflow version is 2.3.2
My config:
[core]
dags_folder=/opt/bitnami/airflow/dags
hostname_callable=socket.getfqdn
default_timezone=utc
executor=CeleryExecutor
parallelism=32
max_active_tasks_per_dag=16
dags_are_paused_at_creation=True
max_active_runs_per_dag=16
load_examples=False
plugins_folder=/opt/bitnami/airflow/plugins
execute_tasks_new_python_interpreter=False
fernet_key=FlXNrJzmw-2VrOBAd8dqFBNJX4DH1SZTdPq9FFMZoQo=
donot_pickle=True
dagbag_import_timeout=30.0
dagbag_import_error_tracebacks=True
dagbag_import_error_traceback_depth=2
dag_file_processor_timeout=50
task_runner=StandardTaskRunner
default_impersonation=
security=
unit_test_mode=False
enable_xcom_pickling=False
killed_task_cleanup_time=60
dag_run_conf_overrides_params=True
dag_discovery_safe_mode=True
dag_ignore_file_syntax=regexp
default_task_retries=0
default_task_weight_rule=downstream
default_task_execution_timeout=
min_serialized_dag_update_interval=30
compress_serialized_dags=False
min_serialized_dag_fetch_interval=10
max_num_rendered_ti_fields_per_task=30
check_slas=True
xcom_backend=airflow.models.xcom.BaseXCom
lazy_load_plugins=True
lazy_discover_providers=True
hide_sensitive_var_conn_fields=True
sensitive_var_conn_names=
default_pool_task_slot_count=128
max_map_length=1024
[database]
sql_alchemy_conn=postgresql+psycopg2://airflow:airflow@airflow-dev-postgresql:5432/airflow
sql_engine_encoding=utf-8
sql_alchemy_pool_enabled=True
sql_alchemy_pool_size=5
sql_alchemy_max_overflow=10
sql_alchemy_pool_recycle=1800
sql_alchemy_pool_pre_ping=True
sql_alchemy_schema=
load_default_connections=True
max_db_retries=3
[logging]
base_log_folder=/opt/bitnami/airflow/logs
remote_logging=False
remote_log_conn_id=
google_key_path=
remote_base_log_folder=
encrypt_s3_logs=False
logging_level=INFO
celery_logging_level=
fab_logging_level=WARNING
logging_config_class=
colored_console_log=True
colored_log_format=[%%(blue)s%%(asctime)s%%(reset)s] {%%(blue)s%%(filename)s:%%(reset)s%%(lineno)d} %%(log_color)s%%(levelname)s%%(reset)s - %%(log_color)s%%(message)s%%(reset)s
colored_formatter_class=airflow.utils.log.colored_log.CustomTTYColoredFormatter
log_format=[%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format=%%(asctime)s %%(levelname)s - %%(message)s
task_log_prefix_template=
log_filename_template=dag_id={{ ti.dag_id }}/run_id={{ ti.run_id }}/task_id={{ ti.task_id }}/{%% if ti.map_index >= 0 %%}map_index={{ ti.map_index }}/{%% endif %%}attempt={{ try_number }}.log
log_processor_filename_template={{ filename }}.log
dag_processor_manager_log_location=/opt/bitnami/airflow/logs/dag_processor_manager/dag_processor_manager.log
task_log_reader=task
extra_logger_names=
worker_log_server_port=8793
[metrics]
statsd_on=False
statsd_host=localhost
statsd_port=8125
statsd_prefix=airflow
statsd_allow_list=
stat_name_handler=
statsd_datadog_enabled=False
statsd_datadog_tags=
[secrets]
backend=
backend_kwargs=
[cli]
api_client=airflow.api.client.local_client
endpoint_url=http://localhost:8080
[debug]
fail_fast=False
[api]
enable_experimental_api=False
auth_backends=airflow.api.auth.backend.session
maximum_page_limit=100
fallback_page_limit=100
google_oauth2_audience=
google_key_path=
access_control_allow_headers=
access_control_allow_methods=
access_control_allow_origins=
[lineage]
backend=
[atlas]
sasl_enabled=False
host=
port=21000
username=
password=
[operators]
default_owner=airflow
default_cpus=1
default_ram=512
default_disk=512
default_gpus=0
default_queue=default
allow_illegal_arguments=False
[hive]
default_hive_mapred_queue=
[webserver]
base_url=http://localhost:8080
default_ui_timezone=UTC
web_server_host=0.0.0.0
web_server_port=8080
web_server_ssl_cert=
web_server_ssl_key=
session_backend=database
web_server_master_timeout=120
web_server_worker_timeout=120
worker_refresh_batch_size=1
worker_refresh_interval=6000
reload_on_plugin_change=False
secret_key=a1pjQkdXZTRtYjFDOENlRklTYld6SVl2NjlMUVJORXY=
workers=4
worker_class=sync
access_logfile=-
error_logfile=-
access_logformat=
expose_config=False
expose_hostname=True
expose_stacktrace=True
dag_default_view=grid
dag_orientation=LR
log_fetch_timeout_sec=5
log_fetch_delay_sec=2
log_auto_tailing_offset=30
log_animation_speed=1000
hide_paused_dags_by_default=False
page_size=100
navbar_color=
default_dag_run_display_number=25
enable_proxy_fix=False
proxy_fix_x_for=1
proxy_fix_x_proto=1
proxy_fix_x_host=1
proxy_fix_x_port=1
proxy_fix_x_prefix=1
cookie_secure=False
cookie_samesite=Lax
default_wrap=False
x_frame_enabled=True
show_recent_stats_for_completed_runs=True
update_fab_perms=True
session_lifetime_minutes=43200
instance_name_has_markup=False
auto_refresh_interval=3
warn_deployment_exposure=True
audit_view_excluded_events=gantt,landing_times,tries,duration,calendar,graph,grid,tree,tree_data
[email]
email_backend=airflow.utils.email.send_email_smtp
email_conn_id=smtp_default
default_email_on_retry=True
default_email_on_failure=True
[smtp]
smtp_host=localhost
smtp_starttls=True
smtp_ssl=False
smtp_port=25
smtp_mail_from=airflow@example.com
smtp_timeout=30
smtp_retry_limit=5
[sentry]
sentry_on=false
sentry_dsn=
[local_kubernetes_executor]
kubernetes_queue=kubernetes
[celery_kubernetes_executor]
kubernetes_queue=kubernetes
[celery]
celery_app_name=airflow.executors.celery_executor
worker_concurrency=16
worker_prefetch_multiplier=1
worker_enable_remote_control=true
worker_umask=0o077
broker_url=redis://:otUjs01rLS@airflow-dev-redis-master:6379/1
result_backend=db+postgresql://airflow:airflow@airflow-dev-postgresql:5432/airflow
flower_host=0.0.0.0
flower_url_prefix=
flower_port=5555
flower_basic_auth=
sync_parallelism=0
celery_config_options=airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
ssl_active=False
ssl_key=
ssl_cert=
ssl_cacert=
pool=prefork
operation_timeout=1.0
task_track_started=True
task_adoption_timeout=600
stalled_task_timeout=0
task_publish_max_retries=3
worker_precheck=False
[celery_broker_transport_options]
[dask]
cluster_address=127.0.0.1:8786
tls_ca=
tls_cert=
tls_key=
[scheduler]
job_heartbeat_sec=5
scheduler_heartbeat_sec=5
num_runs=-1
scheduler_idle_sleep_time=1
min_file_process_interval=30
deactivate_stale_dags_interval=60
dag_dir_list_interval=300
print_stats_interval=30
pool_metrics_interval=5.0
scheduler_health_check_threshold=30
orphaned_tasks_check_interval=300.0
child_process_log_directory=/opt/bitnami/airflow/logs/scheduler
scheduler_zombie_task_threshold=300
zombie_detection_interval=10.0
catchup_by_default=True
ignore_first_depends_on_past_by_default=True
max_tis_per_query=512
use_row_level_locking=True
max_dagruns_to_create_per_loop=10
max_dagruns_per_loop_to_schedule=20
schedule_after_task_execution=True
parsing_processes=2
file_parsing_sort_mode=modified_time
standalone_dag_processor=False
max_callbacks_per_loop=20
use_job_schedule=True
allow_trigger_in_future=False
dependency_detector=airflow.serialization.serialized_objects.DependencyDetector
trigger_timeout_check_interval=15
[triggerer]
default_capacity=1000
[kerberos]
ccache=/tmp/airflow_krb5_ccache
principal=airflow
reinit_frequency=3600
kinit_path=kinit
keytab=airflow.keytab
forwardable=True
include_ip=True
[github_enterprise]
api_rev=v3
[elasticsearch]
host=
log_id_template={dag_id}-{task_id}-{run_id}-{map_index}-{try_number}
end_of_log_mark=end_of_log
frontend=
write_stdout=False
json_format=False
json_fields=asctime, filename, lineno, levelname, message
host_field=host
offset_field=offset
[elasticsearch_configs]
use_ssl=False
verify_certs=True
[kubernetes]
pod_template_file=
worker_container_repository=
worker_container_tag=
namespace=default
delete_worker_pods=True
delete_worker_pods_on_failure=False
worker_pods_creation_batch_size=1
multi_namespace_mode=False
in_cluster=True
kube_client_request_args=
delete_option_kwargs=
enable_tcp_keepalive=True
tcp_keep_idle=120
tcp_keep_intvl=30
tcp_keep_cnt=6
verify_ssl=True
worker_pods_pending_timeout=300
worker_pods_pending_timeout_check_interval=120
worker_pods_queued_check_interval=60
worker_pods_pending_timeout_batch_size=100
[sensors]
default_timeout=604800
[smart_sensor]
use_smart_sensor=False
shard_code_upper_limit=10000
shards=5
sensors_enabled=NamedHivePartitionSensor
What is causing this strange behaviour?
I tried googling the problem, but I can't find anyone with the same strange Airflow behaviour.
dagbag_import_timeout is how long the DagFileProcessor may spend importing a DAG Python script before timing out. So in your case, the DagFileProcessor fails when it exceeds 30 s while trying to import the DAG. For each task operation (schedule, queue, run, ...), Airflow tries to parse the DAG script that contains the task, so if you have multiple parallel tasks and your DAG script contains code that takes a long time to execute (reading from an external service, a loop, ...), you may see this problem in your log.
I suggest increasing dagbag_import_timeout to 2-5 minutes, and increasing the time between two DagFileProcessorProcess runs by raising min_file_process_interval in the config.
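To double-check what the DEV scheduler actually uses, here is a minimal sketch (assuming the option names from the airflow.cfg above, which match the stock Airflow 2.3.x layout) that prints the effective values:
# Sketch: print the parsing-related settings the scheduler actually sees.
# Section/option names are taken from the airflow.cfg posted above.
from airflow.configuration import conf

print("dagbag_import_timeout     =", conf.getfloat("core", "dagbag_import_timeout"))
print("min_file_process_interval =", conf.getint("scheduler", "min_file_process_interval"))
print("parsing_processes         =", conf.getint("scheduler", "parsing_processes"))
print("dag_dir_list_interval     =", conf.getint("scheduler", "dag_dir_list_interval"))
If the values printed on DEV differ from the file, look for environment-variable overrides: Airflow resolves AIRFLOW__<SECTION>__<KEY> variables before falling back to airflow.cfg.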

How to visualize a clog2 file from MPI/MPE using jumpshot

I am following the examples given in the "Using MPI" book and am working on the example that turns on MPE logging (pmatmatlog.c; I ported it from the Fortran example). It runs and produces a log file called "pmatmat.log.clog2". I would like to visualize this log file.
I started by trying to use “jumpshot-4”, because that is what is installed and I cannot find a way to download jumpshot-2 (which appears to favor log files in the clog2 format?). Jumpshot-4, however, wants files in slog2 format and gives an error about not finding “clog2TOslog2” in the TAU directory tree.
Looking at the head of the clog2 file, it appears to be correct as far as I can tell:
$ head pmatmat.log.clog2
CLOG-02.44is_big_endian=TRUE is_finalzed=TRUE block_size=65536num_buffered_blocks=128max_comm_world_size=4max_thread_count=1known_eventID_start=0user_eventID_start=600known_solo_eventID_start=-10user_solo_eventID_start=5000known_stateID_count=300user_stateID_count=4known_solo_eventID_count=0user_solo_eventID_count=0commtable_fptr=0107374182466560>? … <and on into a lot of unreadable binary>
When I try to convert my clog2 file on the command line by calling "clog2TOslog2", I get the following:
$ Clog2ToSlog2 ./pmatmat.log.clog2
GUI_LIBDIR is set. GUI_LIBDIR = /Users/markrbower/mpi/lib
**** Error! State!=State
**** Error! State!=State
**** Error! State!=State
**** Error! State!=State
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at logformat.clog2TOdrawable.InputLog$ContentIterator.hasNext(InputLog.java:481)
at logformat.clog2TOdrawable.InputLog.peekNextKind(InputLog.java:58)
at logformat.slog2.output.Clog2ToSlog2.main(Clog2ToSlog2.java:77)
Caused by: logformat.clog2TOdrawable.NoMatchingEventException: No matching State end-event for Record RecHeader[ time=1.9073486328125E-5, icomm=0, rank=1, thread=0, rectype=6 ], RecCargo[ etype=1, bytes=bstartbendcomputecomputedpma ]
at logformat.clog2TOdrawable.Topo_State.matchFinalEvent(Topo_State.java:95)
... 7 more
java.lang.reflect.InvocationTargetException
After a lot of errors that all look to be “InvocationTargetException” as above, the tail says:
…
SLOG-2 Header:
version = SLOG 2.0.6
NumOfChildrenPerNode = 2
TreeLeafByteSize = 65536
MaxTreeDepth = 0
MaxBufferByteSize = 1346
Categories is FBinfo(157 # 1454)
MethodDefs is FBinfo(0 # 0)
LineIDMaps is FBinfo(232 # 1611)
TreeRoot is FBinfo(1346 # 108)
TreeDir is FBinfo(38 # 1843)
Annotations is FBinfo(0 # 0)
Postamble is FBinfo(0 # 0)
Number of Drawables = 20
Number of Unmatched Events = 0
Total ByteSize of the logfile = 8320
timeElapsed between 1 & 2 = 13 msec
timeElapsed between 2 & 3 = 79 msec
$
There are several routes I could see to finally get to visualizing the log file:
1. re-install TAU and hope that allows jumpshot-4 to find the converter
2. install MPI2 with MPE2 and try the newer version of everything
3. find a way to download and install Jumpshot-2 and hope that reads clog2 files
4. find some other way to convert clog2 to slog2
Why isn’t the conversion working and which is the best option to pursue?

Slurm: all CPUs in a node are allocated to a job that needs only a subset of the CPUs

I have every node configured as follows in slurm.conf:
NodeName=node1 NodeAddr=xxx.xxx.xxx.xxx State=UNKNOWN Procs=32 Boards=1 SocketsPerBoard=2 CoresPerSocket=8 ThreadsPerCore=2 RealMemory=128000 TmpDisk=65536
When I run the following command:
srun -n 2 sleep 60
I find that all the cores in the node are allocated to this job. If another job wants to run on this node, it is blocked until the previous job finishes.
scontrol shows the job information as follows:
JobId=51 JobName=sleep
UserId=hadoop(1002) GroupId=hadoop(1002) MCS_label=N/A
Priority=4294901703 Nice=0 Account=hadoop QOS=normal
JobState=RUNNING Reason=None Dependency=(null)
Requeue=1 Restarts=0 BatchFlag=0 Reboot=0 ExitCode=0:0
RunTime=00:00:12 TimeLimit=UNLIMITED TimeMin=N/A
SubmitTime=2018-07-16T21:46:56 EligibleTime=2018-07-16T21:46:56
StartTime=2018-07-16T21:46:56 EndTime=Unknown Deadline=N/A
PreemptTime=None SuspendTime=None SecsPreSuspend=0
LastSchedEval=2018-07-16T21:46:56
Partition=TOTAL AllocNode:Sid=node1:25124
ReqNodeList=(null) ExcNodeList=(null)
NodeList=xxx.xxx.xxx
BatchHost=xxx.xxx.xxx
NumNodes=1 NumCPUs=32 NumTasks=2 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
TRES=cpu=32,mem=125G,node=1,billing=32
Socks/Node=* NtasksPerN:B:S:C=0:0:*:* CoreSpec=*
MinCPUsNode=1 MinMemoryNode=125G MinTmpDiskNode=0
Features=(null) DelayBoot=00:00:00
Gres=(null) Reservation=(null)
OverSubscribe=NO Contiguous=0 Licenses=(null) Network=(null)
Command=sleep
WorkDir=/home/hadoop
Power=
Using sacct to get the job history, I get the following output:
JobID JobName Partition Account AllocCPUS State ExitCode
------------ ---------- ---------- ---------- ---------- ---------- --------
51 sleep TOTAL hadoop 32 COMPLETED 0:0
51.0 sleep hadoop 2 COMPLETED 0:0
Showing the partition information:
PartitionName=TOTAL
AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
AllocNodes=ALL Default=YES QoS=N/A
DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0
Hidden=NO
MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=1 LLN=NO
MaxCPUsPerNode=UNLIMITED
Nodes=xxxxxxx
PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=NO
OverTimeLimit=NONE PreemptMode=OFF
State=UP TotalCPUs=96 TotalNodes=3 SelectTypeParameters=NONE
DefMemPerNode=UNLIMITED MaxMemPerNode=UNLIMITED
It seems something is wrong.
The problem is caused by SelectType. I had left it at the default value, which I think is select/linear. As mentioned in the Select Plugin Design Guide, select/linear is node-centric:
The select/linear and select/cons_res plugins have similar modes of operation. The obvious difference is that data structures in select/linear are node-centric, while those in select/cons_res contain information at a finer resolution (sockets, cores, threads, or CPUs depending upon the SelectTypeParameters configuration parameter).
I changed SelectType to select/cons_res and restarted the whole cluster, and the problem is solved.
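For reference, a minimal sketch of the slurm.conf change (SelectTypeParameters=CR_Core is an assumption; pick the CR_* value that matches how you want cores and memory tracked), followed by restarting slurmctld and the slurmd daemons:
# allocate individual cores instead of whole nodes
SelectType=select/cons_res
SelectTypeParameters=CR_Core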

MariaDB + MaxScale Replication Error : The slave I/O thread stops because a fatal error is encountered when it tried to SELECT @master_binlog_checksum

I am trying to set up real-time data streaming to Kafka with MaxScale CDC and MariaDB version 10.0.32. After configuring replication, I am getting the status:
"The slave I/O thread stops because a fatal error is encountered when it tried to SELECT @master_binlog_checksum".
Below are all of my configurations:
MariaDB - Configuration
server-id = 1
#report_host = master1
#auto_increment_increment = 2
#auto_increment_offset = 1
log_bin = /var/log/mysql/mariadb-bin
log_bin_index = /var/log/mysql/mariadb-bin.index
binlog_format = row
binlog_row_image = full
# not fab for performance, but safer
#sync_binlog = 1
expire_logs_days = 10
max_binlog_size = 100M
# slaves
#relay_log = /var/log/mysql/relay-bin
#relay_log_index = /var/log/mysql/relay-bin.index
#relay_log_info_file = /var/log/mysql/relay-bin.info
#log_slave_updates
#read_only
MaxScale Configuration
[server1]
type=server
address=192.168.56.102
port=3306
protocol=MariaDBBackend
[Replication]
type=service
router=binlogrouter
version_string=10.0.27-log
user=myuser
passwd=mypwd
server_id=3
#binlogdir=/var/lib/maxscale
#mariadb10-compatibility=1
router_options=binlogdir=/var/lib/maxscale,mariadb10-compatibility=1
#slave_sql_verify_checksum=1
[Replication Listener]
type=listener
service=Replication
protocol=MySQLClient
port=5308
Starting Replication
CHANGE MASTER TO MASTER_HOST='192.168.56.102', MASTER_PORT=5308, MASTER_USER='myuser', MASTER_PASSWORD='mypwd', MASTER_LOG_POS=328, MASTER_LOG_FILE='mariadb-bin.000018';
START SLAVE;
Replication Status
Master_Host: 192.168.56.102
Master_User: myuser
Master_Port: 5308
Connect_Retry: 60
Master_Log_File: mariadb-bin.000018
Read_Master_Log_Pos: 328
Relay_Log_File: mysqld-relay-bin.000002
Relay_Log_Pos: 4
Relay_Master_Log_File: mariadb-bin.000018
**Slave_IO_Running: No**
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 328
Relay_Log_Space: 248
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: NULL
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 1593
Last_IO_Error: **The slave I/O thread stops because a fatal error is encountered when it tried to SELECT @master_binlog_checksum. Error:**
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 0
Master_SSL_Crl:
Master_SSL_Crlpath:
Using_Gtid: No
Gtid_IO_Pos:
The binlogrouter performs the following query to set the value of @master_binlog_checksum (real replication slaves perform the same query):
SET @master_binlog_checksum = @@global.binlog_checksum
Checking what its output is will probably explain why the replication won't start. Most likely the SET query failed, which is why the later SELECT @master_binlog_checksum query returns unexpected results.
In cases like these, it is recommended to open a bug report on the MariaDB Jira under the MaxScale project. This way the possibility of a bug is ruled out, and if it turns out to be a configuration problem, the documentation can be updated to explain more clearly how to configure MaxScale.
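As a quick check, you can run the same statements by hand with a MySQL client connected to the binlog router listener (port 5308 in the configuration above); a sketch of the session, using the same variable names as the error message:
SET @master_binlog_checksum = @@global.binlog_checksum;
SELECT @master_binlog_checksum;  -- shows what the router hands back for the checksum query
If the SET fails, or the SELECT returns NULL instead of the master's binlog_checksum value (typically NONE or CRC32), that shows where the handling of the checksum query breaks down.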

PSI - Statusing Web Service - Results not as expected

I'm trying to update status information on assignments via the Statusing web service (PSI). The problem is that the results are not as expected. I'll try to explain what I'm doing in detail:
Two cases:
1) An assignment for the resource exists on specified tasks. I want to report work actuals (update status).
2) There is no assignment for the resource on specified tasks. I want to create the assignment and report work actuals.
I have one task in my project (Auto scheduled, Fixed work). Resource availability of all resources is set to 100%. They all have the same calendar.
Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs
First I execute an UpdateStatus with the following ChangeXML
<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
<Assn ID="d7273a28-c038-486b-b997-cdb2450ceef5" ResID="8a164257-7960-4b76-9506-ccd0efabdb72">
<Change PID="251658250">900000</Change>
</Assn>
</Proj>
</Changes>
Then I call a SubmitStatusForResource
client.SubmitStatusForResource(new Guid("8a164257-7960-4b76-9506-ccd0efabdb72"), null, "auto submit PSIStatusingGateway");
The following entry pops up in approval center (which is as I expected it):
Status Update; Task 31; Task update; Resource 1; 3/20/2012; 15h; 15%;
85h
Update in Project (still looks fine):
Task Name: Task 31 - Fixed Work
Duration: 12,5 days?
Start: Thu 14.03.13
Finish: Tue 02.04.13
Resource Names: Resource 1
Work: 100 hrs
Actual Work: 15 hrs
Remaining Work: 85 hrs
Then the second case is executed: first I create a new assignment...
client.CreateNewAssignmentWithWork(
sName: Task 31 - Fixed Work,
projGuid: "a8a601ce-f3ab-4c01-97ce-fecdad2359d9",
taskGuid: "024d7b61-858b-40bb-ade3-009d7d821b3f",
assnGuid: "e3451938-36a5-4df3-87b1-0eb4b25a1dab",
sumTaskGuid: Guid.Empty,
dtStart: 14.03.2013 08:00:00,
dtFinish: 02.04.2013 15:36:00,
actWork: 900000,
fMilestone: false,
fAddToTimesheet: false,
fSubmit: false,
sComment: "auto commit...");
Then I call the UpdateStatus again:
<Changes xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<Proj ID="a8a601ce-f3ab-4c01-97ce-fecdad2359d9">
<Assn ID="e3451938-36a5-4df3-87b1-0eb4b25a1dab" ResID="c59ad8e2-7533-47bd-baa5-f5b03c3c43d6">
<Change PID="251658250">900000</Change>
</Assn>
</Proj>
</Changes>
And finally the SubmitStatusForResource again
client.SubmitStatusForResource(new Guid("c59ad8e2-7533-47bd-baa5-f5b03c3c43d6"), null, "auto submit PSIStatusingGateway");
This creates the following entry in approval center:
Status Update; Task 31 - Fixed Work; New reassignment request;
Resource 2; 3/20/2012; 15h; 100%; 0h
I accept it and update my project:
Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 69,05 hrs
Actual Work: 30 hrs
Remaining Work: 39,05 hrs
And I really don't get why the new work would be 69,05 hours. The results I expected would have been:
Name: Task 31 - Fixed Work
Duration: 6,76 days?
Start: Thu 14.03.13
Finish: Mon 25.03.13
Resource Names: Resource 1;Resource 2
Work: 65 hrs
Actual Work: 30 hrs
Remaining Work: 35 hrs
I've spent quite a lot of time trying to find out how to update the values to get the results that I want. I would really appreciate some help. This makes me want to rip my hair out!
Thanks in advance
PS: Forgot to say that I'm working with MS Project Server 2010 and MS Project Professional 2010
