I'm trying to test the artifactory-resource by running through the example pipeline:
groups:
- name: all
  jobs:
  - set-pipeline
  - trigger-when-new-file-is-added-to-artifactory

jobs:
- name: set-pipeline
  serial: true
  plan:
  - in_parallel:
    - get: ea-terraform-module-aws-rds
      trigger: true
  - set_pipeline: deploying-rds-instance-from-jfrog-artifact
    file: ea-terraform-module-aws-rds/examples/concourse/ea-terraform-module-aws-rds.yml

- name: trigger-when-new-file-is-added-to-artifactory
  plan:
  - get: ea-rds-jfrog-repo
  - task: use-new-file
    config:
      platform: linux
      image_resource:
        type: docker-image
        source:
          repository: ubuntu
      inputs:
      - name: ea-rds-jfrog-repo
      run:
        path: cat
        args:
        - "./ea-rds-jfrog-repo/ea-terraform-module-aws-rds*.zip"

resource_types:
- name: artifactory
  type: docker-image
  source:
    repository: pivotalservices/artifactory-resource

resources:
- name: ea-rds-jfrog-repo
  type: artifactory
  check_every: 1m
  source:
    endpoint: https://xxx.jfrog.io/artifactory
    repository: "ea-terraform-module-aws-rds-1.4.0.zip"
    regex: "ea-terraform-module-aws-rds-(?<version>.*).zip"
    username: ${JF_USER}
    password: ${JF_PASSWORD}

- name: ea-terraform-module-aws-rds
  type: git
  source:
    private_key: ((github_private_key))
    uri: git#github.com:xxx/xxx
    branch: SAAS-27134
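For reference, the regex source parameter is meant to pull the version out of the artifact file name; that intent can be sanity-checked locally (illustrative only, using GNU grep's PCRE mode rather than the resource's own matcher):

# prints "1.4.0" for the artifact above
echo 'ea-terraform-module-aws-rds-1.4.0.zip' | grep -oP 'ea-terraform-module-aws-rds-\K.*(?=\.zip)'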
Concourse error: pipeline path -> deploying-rds-instance-from-jfrog-artifact/ea-rds-jfrog-repo
Repo on JFrog Artifactory
I also tried adding a version parameter.
This error indicates that the Concourse resource is making a call to the Artifactory API but, instead of receiving a JSON structure, it gets a null response. The resource then passes that null response to the jq utility, which expects an iterable object.
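The jq part is easy to reproduce on its own (illustrative; this is not the resource's exact invocation):

# jq refuses to iterate a null input, which is exactly the symptom here
echo 'null' | jq '.[]'
# fails with: jq: error ... Cannot iterate over null (null)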
So why does it get a null response from the API?
It looks like, at minimum, the repository: in the ea-rds-jfrog-repo resource definition is incorrect.
Based on the second screenshot (the repo on JFrog Artifactory), I'm going to guess it should be set to
repository: "/ea-terraform-module-aws-rds/terraform-module/aws"
I also recommend using https://github.com/spring-io/artifactory-resource, which is in active development, instead of the unmaintained one from pivotalservices.
I want to schedule Linux patching and restart the minions if the updates succeed. I have created a state for the OS update process that sends an event to the event bus, and reactors that listen for the event tag and reboot the server on success, but the reactor does not react to anything and takes no action.
# /srv/salt/os/updates/linuxupdate.sls
uptodate:
  pkg.uptodate:
    - refresh: True

event-completed:
  event.send:
    - name: 'salt/minion-update/success'
    - require:
      - pkg: uptodate

event-failed:
  event.send:
    - name: 'salt/minion-update/failed'
    - onfail:
      - pkg: uptodate

# /etc/salt/master.d/reactor.conf
reactor:
  - 'salt/minion-update/success':
    - /srv/reactor/reboot.sls

# /srv/reactor/reboot.sls
reboot_server:
  local.state.sls:
    - tgt: {{ data['id'] }}
    - arg:
      - os.updates.reboot-server
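To debug, these standard Salt commands (default master setup assumed) should let me check the pieces in isolation:

# watch the master's event bus while the update state runs,
# to confirm the custom tag actually arrives
salt-run state.event pretty=True

# fire the event by hand from a minion, to test the reactor on its own
salt-call event.send 'salt/minion-update/success'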
I'm working on the observability part of OpenSearch, so I'm trying to collect the trace data of a WordPress website and send it to OpenSearch.
I'm collecting the trace data using the WordPress plugin DecaLog, which sends the data to the Jaeger agent; from Jaeger the data goes to the OpenTelemetry Collector, then to Data Prepper, and finally to OpenSearch.
The Jaeger agent service in docker-compose:
jaeger-agent:
  container_name: jaeger-agent
  image: jaegertracing/jaeger-agent:latest
  command: [ "--reporter.grpc.host-port=otel-collector:14250" ]
  ports:
    - "5775:5775/udp"
    - "6831:6831/udp"
    - "6832:6832/udp"
    - "5778:5778/tcp"
  networks:
    - our-network
The "command" ligne got me this error : Err: connection error: desc = "transport: Error while dialing dial tcp: lookup otel-collector on 127.0.0.11:53: server misbehaving"","system":"grpc","grpc_log":true
So I changed otel-collector to the IP of the otel-collector container.
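For reference, one way to confirm that both containers are attached to the same Docker network, so that name-based lookup can work (network name taken from the compose file; compose may prefix it with the project name):

# both jaeger-agent and otel-collector should appear in the "Containers" section
docker network inspect our-network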
The OTel Collector and Data Prepper are installed using docker-compose:
data-prepper:
  restart: unless-stopped
  container_name: data-prepper
  image: opensearchproject/data-prepper:latest
  volumes:
    - ./data-prepper/examples/trace_analytics_no_ssl.yml:/usr/share/data-prepper/pipelines.yaml
    - ./data-prepper/examples/data-prepper-config.yaml:/usr/share/data-prepper/data-prepper-config.yaml
    - ./data-prepper/examples/demo/root-ca.pem:/usr/share/data-prepper/root-ca.pem
  ports:
    - "21890:21890"
  networks:
    - our-network
  depends_on:
    - "opensearch"

otel-collector:
  container_name: otel-collector
  image: otel/opentelemetry-collector:0.54.0
  command: [ "--config=/etc/otel-collector-config.yml" ]
  working_dir: "/project"
  volumes:
    - ${PWD}/:/project
    - ./otel-collector-config.yml:/etc/otel-collector-config.yml
    - ./data-prepper/examples/demo/demo-data-prepper.crt:/etc/demo-data-prepper.crt
  ports:
    - "4317:4317"
  depends_on:
    - data-prepper
  networks:
    - our-network
The configuration of otel-collector-config.yml (to send data from OpenTelemetry to OpenSearch):
receivers:
  jaeger:
    protocols:
      grpc:

exporters:
  otlp/2:
    endpoint: data-prepper:21890
    tls:
      insecure: true
      insecure_skip_verify: true
  logging:

service:
  pipelines:
    traces:
      receivers: [jaeger]
      exporters: [logging, otlp/2]
The configuration for the Data Prepper pipeline:
entry-pipeline:
  delay: "100"
  source:
    otel_trace_source:
      ssl: false
  sink:
    - pipeline:
        name: "raw-pipeline"
    - pipeline:
        name: "service-map-pipeline"

raw-pipeline:
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - otel_trace_raw_prepper:
  sink:
    - opensearch:
        hosts: [ "http://localhost:9200" ]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_raw: true

service-map-pipeline:
  delay: "100"
  source:
    pipeline:
      name: "entry-pipeline"
  prepper:
    - service_map_stateful:
  sink:
    - opensearch:
        hosts: ["http://localhost:9200"]
        cert: "/usr/share/data-prepper/root-ca.pem"
        username: "admin"
        password: "admin"
        trace_analytics_service_map: true
As of now I'm getting the following errors:
Jaeger agent:
Err: connection error: desc = \"transport: Error while dialing dial tcp otel-collector-container-IP:14250: i/o timeout\"","system":"grpc","grpc_log":true}
OpenTelemetry Collector:
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "logging"}
2022-08-04T15:31:32.675Z info pipelines/pipelines.go:78 Exporter is starting... {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:82 Exporter started. {"kind": "exporter", "data_type": "traces", "name": "otlp/2"}
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:86 Starting processors...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:98 Starting receivers...
2022-08-04T15:31:32.682Z info pipelines/pipelines.go:102 Exporter is starting... {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info static/strategy_store.go:203 No sampling strategies provided or URL is unavailable, using defaults {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info pipelines/pipelines.go:106 Exporter started. {"kind": "receiver", "name": "jaeger", "pipeline": "traces"}
2022-08-04T15:31:32.683Z info service/collector.go:220 Starting otelcol... {"Version": "0.54.0", "NumCPU": 2}
2022-08-04T15:31:32.683Z info service/collector.go:128 Everything is ready. Begin running and processing data.
2022-08-04T15:31:32.684Z warn zapgrpc/zapgrpc.go:191 [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
"Addr": "data-prepper:21890",
"ServerName": "data-prepper:21890",
"Attributes": null,
"BalancerAttributes": null,
"Type": 0,
"Metadata": null
}. Err: connection error: desc = "transport: Error while dialing dial tcp data-prepper-container-ip:21890: connect: connection refused" {"grpc_log": true}
Data Prepper:
Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.amazon.dataprepper.DataPrepper]: Constructor threw exception; nested exception is java.lang.RuntimeException: No valid pipeline is available for execution, exiting
Followed by this at the end:
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
2022-08-04T15:23:22,803 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperAppConfiguration - Command line args: /usr/share/data-prepper/pipelines.yaml,/usr/share/data-prepper/data-prepper-config.yaml
2022-08-04T15:23:22,806 [main] INFO com.amazon.dataprepper.parser.config.DataPrepperArgs - Using /usr/share/data-prepper/pipelines.yaml configuration file
OpenSearch needs a separate tool to support ingestion of OpenTelemetry data. It is called Data Prepper and is part of the OpenSearch project. There is a nice getting-started guide on how to set up trace analytics in OpenSearch.
Data Prepper works similarly to Fluentd or the OpenTelemetry Collector, but has proper support for OpenSearch as a data sink. It pre-processes trace data appropriately for the OpenSearch Dashboards tracing plugin. Data Prepper also supports the OpenTelemetry metrics format.
Are you still having issues running Data Prepper? The configuration used in this example has been updated since the latest release and should now be up to date and working: https://github.com/opensearch-project/data-prepper/blob/main/examples/trace_analytics_no_ssl.yml
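If it still fails, note that your opensearch sinks point at http://localhost:9200, and inside the data-prepper container localhost is the container itself, not the OpenSearch node. A quick reachability check against the compose service name (assumed to be opensearch here, and assuming curl is available in the image) would be:

# should return the OpenSearch version banner, not a connection error
docker exec data-prepper curl -s http://opensearch:9200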
I'm trying to get a simplified version of the Snowflake operator example to work, but triggering the DAG fails with this error:
Task exited with return code Negsignal.SIGABRT
The DAG only has the first task, which runs the CREATE_TABLE_SQL_STRING. It runs successfully via a test run: airflow dags test sf_example_short 2021-10-10
I can see the table is created in Snowflake, so the connection appears fine and the syntax must be okay.
But if I drop the table and trigger the DAG via the Airflow UI or the CLI (airflow dags trigger sf_example_short), it fails with the vague error:
Task exited with return code Negsignal.SIGABRT
Googling, I've found suggestions to change scheduler_health_check_threshold, schedule_after_task_execution, default_impersonation, OBJC_DISABLE_INITIALIZE_FORK_SAFETY, or killed_task_cleanup_time, but none of these fixed the issue.
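For reference, this is how I applied those overrides (a sketch; the AIRFLOW__SECTION__KEY form is Airflow's standard env-var mapping, and the values are only examples):

# macOS-specific workaround often suggested for fork-related SIGABRTs
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
# Airflow config overrides via environment variables
export AIRFLOW__SCHEDULER__SCHEDULER_HEALTH_CHECK_THRESHOLD=120
export AIRFLOW__SCHEDULER__SCHEDULE_AFTER_TASK_EXECUTION=False
export AIRFLOW__CORE__KILLED_TASK_CLEANUP_TIME=604800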
What am I missing? TIA!
Log excerpt:
[2021-10-11 10:03:57,256] {taskinstance.py:1114} INFO - Executing <Task(SnowflakeOperator): snowflake_cre_tbl> on 2021-10-11T15:01:09+00:00
[2021-10-11 10:03:57,261] {standard_task_runner.py:52} INFO - Started process 73291 to run task
[2021-10-11 10:03:57,271] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'sf_example_short', 'snowflake_cre_tbl', '2021-10-11T15:01:09+00:00', '--job-id', '222', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/sf_example_short.py', '--cfg-path', '/var/folders/jp/b35mp4dj4qn3y35k491hrpg80000gn/T/tmppt5q2b8h', '--error-file', '/var/folders/jp/b35mp4dj4qn3y35k491hrpg80000gn/T/tmp83zp78uz']
[2021-10-11 10:03:57,274] {standard_task_runner.py:77} INFO - Job 222: Subtask snowflake_cre_tbl
[2021-10-11 10:03:57,276] {cli_action_loggers.py:66} DEBUG - Calling callbacks: [<function default_action_log at 0x10e288940>]
[2021-10-11 10:03:57,286] {settings.py:208} DEBUG - Setting up DB connection pool (PID 73291)
[2021-10-11 10:03:57,287] {settings.py:244} DEBUG - settings.prepare_engine_args(): Using NullPool
[2021-10-11 10:03:57,289] {taskinstance.py:618} DEBUG - Refreshing TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [None]> from DB
[2021-10-11 10:03:57,298] {taskinstance.py:656} DEBUG - Refreshed TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]>
[2021-10-11 10:04:02,296] {taskinstance.py:618} DEBUG - Refreshing TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]> from DB
[2021-10-11 10:04:02,299] {taskinstance.py:656} DEBUG - Refreshed TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]>
[2021-10-11 10:04:02,305] {logging_mixin.py:109} INFO - Running <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]> on host xxxxxxs-MacBook-Pro.local
[2021-10-11 10:04:02,307] {taskinstance.py:618} DEBUG - Refreshing TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]> from DB
[2021-10-11 10:04:02,310] {taskinstance.py:656} DEBUG - Refreshed TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]>
[2021-10-11 10:04:07,302] {base_job.py:227} DEBUG - [heartbeat]
[2021-10-11 10:04:07,331] {taskinstance.py:684} DEBUG - Clearing XCom data
[2021-10-11 10:04:07,336] {taskinstance.py:691} DEBUG - XCom data cleared
[2021-10-11 10:04:07,350] {taskinstance.py:1251} INFO - Exporting the following env vars:
AIRFLOW_CTX_DAG_OWNER=airflow
AIRFLOW_CTX_DAG_ID=sf_example_short
AIRFLOW_CTX_TASK_ID=snowflake_cre_tbl
AIRFLOW_CTX_EXECUTION_DATE=2021-10-11T15:01:09+00:00
AIRFLOW_CTX_DAG_RUN_ID=manual__2021-10-11T15:01:09+00:00
[2021-10-11 10:04:07,351] {__init__.py:146} DEBUG - Preparing lineage inlets and outlets
[2021-10-11 10:04:07,351] {__init__.py:190} DEBUG - inlets: [], outlets: []
[2021-10-11 10:04:07,352] {snowflake.py:119} INFO - Executing: CREATE OR REPLACE TRANSIENT TABLE SF_SHORT_TEST (name VARCHAR(250), id INT);
[2021-10-11 10:04:07,356] {base.py:70} INFO - Using connection to: id: snowflake_conn. Host: https://***.snowflakecomputing.com/, Port: None, Schema: airflow1, Login: xxxxxxxxxxx, Password: ***, extra: {'account': '***', 'warehouse': 'DEMO_WH', 'database': 'AIRFLOW_SANDBOX', 'role': 'sysadmin'}
[2021-10-11 10:04:07,358] {connection.py:218} INFO - Snowflake Connector for Python Version: 2.4.1, Python Version: 3.8.12, Platform: macOS-10.15.7-x86_64-i386-64bit
[2021-10-11 10:04:07,359] {connection.py:421} DEBUG - connect
[2021-10-11 10:04:07,359] {connection.py:656} DEBUG - __config
[2021-10-11 10:04:07,359] {connection.py:773} INFO - This connection is in OCSP Fail Open Mode. TLS Certificates would be checked for validity and revocation status. Any other Certificate Revocation related exceptions or OCSP Responder failures would be disregarded in favor of connectivity.
[2021-10-11 10:04:07,359] {connection.py:789} INFO - Setting use_openssl_only mode to False
[2021-10-11 10:04:07,360] {converter.py:135} DEBUG - use_numpy: False
[2021-10-11 10:04:07,360] {connection.py:570} DEBUG - REST API object was created: ***.snowflakecomputing.com:443
[2021-10-11 10:04:07,361] {auth.py:129} DEBUG - authenticate
[2021-10-11 10:04:07,362] {auth.py:156} DEBUG - assertion content: *********
[2021-10-11 10:04:07,362] {auth.py:160} DEBUG - account=***, user=xxxxxxxxxxx, database=AIRFLOW_SANDBOX, schema=airflow1, warehouse=DEMO_WH, role=sysadmin, request_id=***
[2021-10-11 10:04:07,362] {auth.py:193} DEBUG - body['data']: {'CLIENT_APP_ID': 'PythonConnector', 'CLIENT_APP_VERSION': '2.4.1', 'SVN_REVISION': None, 'ACCOUNT_NAME': '***', 'LOGIN_NAME': 'xxxxxxxxxxx', 'CLIENT_ENVIRONMENT': {'APPLICATION': 'PythonConnector', 'OS': 'Darwin', 'OS_VERSION': 'macOS-10.15.7-x86_64-i386-64bit', 'PYTHON_VERSION': '3.8.12', 'PYTHON_RUNTIME': 'CPython', 'PYTHON_COMPILER': 'Clang 12.0.0 (clang-1200.0.32.29)', 'OCSP_MODE': 'FAIL_OPEN', 'TRACING': 10, 'LOGIN_TIMEOUT': 120, 'NETWORK_TIMEOUT': None}, 'SESSION_PARAMETERS': {'CLIENT_SESSION_KEEP_ALIVE_HEARTBEAT_FREQUENCY': 900, 'CLIENT_PREFETCH_THREADS': 4}}
[2021-10-11 10:04:07,363] {retry.py:230} DEBUG - Converted retries value: 1 -> Retry(total=1, connect=None, read=None, redirect=None, status=None)
[2021-10-11 10:04:07,364] {retry.py:230} DEBUG - Converted retries value: 1 -> Retry(total=1, connect=None, read=None, redirect=None, status=None)
[2021-10-11 10:04:07,364] {network.py:950} DEBUG - Active requests sessions: 1, idle: 0
[2021-10-11 10:04:07,365] {network.py:650} DEBUG - remaining request timeout: 120, retry cnt: 1
[2021-10-11 10:04:07,365] {network.py:638} DEBUG - Request guid: ***
[2021-10-11 10:04:07,366] {network.py:794} DEBUG - socket timeout: 60
[2021-10-11 10:04:07,405] {local_task_job.py:151} INFO - Task exited with return code Negsignal.SIGABRT
[2021-10-11 10:04:07,405] {taskinstance.py:618} DEBUG - Refreshing TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]> from DB
[2021-10-11 10:04:07,410] {taskinstance.py:656} DEBUG - Refreshed TaskInstance <TaskInstance: sf_example_short.snowflake_cre_tbl 2021-10-11T15:01:09+00:00 [running]>
[2021-10-11 10:04:07,411] {taskinstance.py:1867} DEBUG - Task Duration set to 10.174875
[2021-10-11 10:04:07,411] {taskinstance.py:1505} INFO - Marking task as FAILED. dag_id=sf_example_short, task_id=snowflake_cre_tbl, execution_date=20211011T150109, start_date=20211011T150357, end_date=20211011T150407
Abbreviated system info:
Apache Airflow version | 2.1.3
executor | SequentialExecutor
task_logging_handler | airflow.utils.log.file_task_handler.FileTaskHandler
System info
OS | Mac OS
apache-airflow-providers-snowflake | 1.1.0
I am testing my Meteor app's UI with some browser tests. I use webdriver.io (http://webdriver.io) and a Selenium standalone-chrome node (https://hub.docker.com/r/selenium/standalone-chrome/).
I use the webdriver.io test runner with Mocha as the test framework.
When this block inside a Jade template runs (triggered by opening the corresponding page):
Template.boardBody.onRendered(function() {
  let imagePath = new ReactiveVar('');
  this.autorun(() => {
    imagePath.set(Meteor.settings.public.backgroundPath[1]);
    //document.getElementsByClassName('board-wrapper')[0].style.backgroundImage = "url('" + imagePath.get() + "')";
    $('.board-wrapper').css('background-image', "url('" + imagePath.get() + "')");
  });
});
Headless Chrome crashes with this error:
{ Error: An unknown server-side error occurred while processing the command.
at BoardPage.open (tests/board.page.js:20:5)
at Context.<anonymous> (tests/board.test.js:22:17)
at Promise.F (node_modules/core-js/library/modules/_export.js:35:28)
at execute(<Function>) - at BoardPage.open (tests/page.js:11:13)
message: 'unknown error: session deleted because of page crash\nfrom tab crashed',
type: 'RuntimeError',
screenshot: 'Just a black page',
seleniumStack:
{ status: 13,
type: 'UnknownError',
message: 'An unknown server-side error occurred while processing the command.',
orgStatusMessage: 'unknown error: session deleted because of page crash\nfrom tab crashed\n (Session info: chrome=59.0.3071.115)\n (Driver info: chromedriver=2.30.477691 (6ee44a7247c639c0703f291d320bdf05c1531b57),platform=Linux 4.4.4-200.fc22.x86_64 x86_64) (WARNING: The server did not provide any stacktrace information)\nCommand duration or timeout: 4.13 seconds\nBuild info: version: \'3.4.0\', revision: \'unknown\', time: \'unknown\'\nSystem info: host: \'f362d8ab8951\', ip: \'172.17.0.1\', os.name: \'Linux\', os.arch: \'amd64\', os.version: \'4.4.4-200.fc22.x86_64\', java.version: \'1.8.0_131\'\nDriver info: org.openqa.selenium.chrome.ChromeDriver\nCapabilities [{applicationCacheEnabled=false, rotatable=false, mobileEmulationEnabled=false, networkConnectionEnabled=false, chrome={chromedriverVersion=2.30.477691 (6ee44a7247c639c0703f291d320bdf05c1531b57), userDataDir=/tmp/.org.chromium.Chromium.NUsUeZ}, takesHeapSnapshot=true, pageLoadStrategy=normal, databaseEnabled=false, handlesAlerts=true, hasTouchScreen=false, version=59.0.3071.115, platform=LINUX, browserConnectionEnabled=false, nativeEvents=true, acceptSslCerts=true, locationContextEnabled=true, webStorageEnabled=true, browserName=chrome, takesScreenshot=true, javascriptEnabled=true, cssSelectorsEnabled=true, unexpectedAlertBehaviour=}]\nSession ID: f1e261ec57fde3697e98945af051d236' },
shotTaken: true }
I use chai.expect for my assertion statements, and I have a feeling that the promises are somehow messing up headless Chrome.
Does anyone know why this is happening?