Here is the trace output from running arc diff:
$ arc diff --trace
ARGV '/Users/yangyan/Meican/arcanist/bin/../scripts/arcanist.php' 'diff' '--trace'
LOAD Loaded "phutil" from "/Users/yangyan/Meican/libphutil/src".
LOAD Loaded "arcanist" from "/Users/yangyan/Meican/arcanist/src".
Config: Reading user configuration file "/Users/yangyan/.arcrc"...
Config: Did not find system configuration at "/etc/arcconfig".
Working Copy: Reading .arcconfig from "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git/.arcconfig".
Working Copy: Path "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git" is part of `git` working copy "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git".
Working Copy: Project root is at "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git".
Config: Did not find local configuration at "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git/.git/arc/config".
>>> [0] <conduit> user.whoami() <bytes = 117>
>>> [1] <http> https://code.meican.com/api/user.whoami
<<< [1] <http> 636,235 us
<<< [0] <conduit> 636,726 us
[2016-04-01 06:33:53] EXCEPTION: (ConduitClientException) ERR-INVALID-SESSION: Session key is not present. at [<phutil>/src/conduit/ConduitFuture.php:58]
arcanist(head=master, ref.master=fcc11b3a2781), phutil(head=master, ref.master=3024f0a4908b)
#0 ConduitFuture::didReceiveResult(array) called at [<phutil>/src/future/FutureProxy.php:58]
#1 FutureProxy::getResult() called at [<phutil>/src/future/FutureProxy.php:35]
#2 FutureProxy::resolve() called at [<phutil>/src/conduit/ConduitClient.php:58]
#3 ConduitClient::callMethodSynchronous(string, array) called at [<arcanist>/src/workflow/ArcanistWorkflow.php:332]
#4 ArcanistWorkflow::authenticateConduit() called at [<arcanist>/scripts/arcanist.php:354]
How should I treat the following errors?
Config: Did not find system configuration at "/etc/arcconfig".
Config: Did not find local configuration at "/Users/yangyan/Meican/go/src/code.meican.com/diffusion/DEMETER/demeter.git/.git/arc/config".
Your actual error is ERR-INVALID-SESSION: Session key is not present. The two "Config: Did not find ..." lines are purely informational; arc simply falls back to the next configuration source, so you can ignore them.
Have you set up a certificate yet? Run arc install-certificate.
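As a minimal sketch, using the Conduit install from the trace above (the commented steps describe what arc prompts for; details vary by Phabricator version):
$ arc install-certificate https://code.meican.com/
# arc asks you to open the install's Conduit login page in a browser,
# log in, and paste the API token it shows; the credentials are then
# written to ~/.arcrc, so the next user.whoami() call can authenticate.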
I am currently running a Symfony 5 project in the dev environment.
I would like to output requests logs (like 10:01:39 request.INFO Matched route "login_route") into a file.
I have the following config/packages/dev/monolog.yaml file:
monolog:
    handlers:
        main:
            type: stream
            path: "%kernel.logs_dir%/%kernel.environment%.log"
            level: debug
            channels: [event]
With the YAML above, it logs correctly into the file /tmp/dev-logs/dev.log when I execute bin/console cache:clear.
But it does not log anything when I perform requests on the application, no matter whether I set channels: [request], channels: ~, or no channels param at all.
How can I edit the settings of that monolog.yaml file in order to log request channel logs?
I have found the answer! This is very specific to my configuration.
In fact, I have two Docker containers (that both mount the project directory as a volume) for development:
one for code editing (with a linter, syntax checker, specific vim configuration, etc.)
one to access the application through HTTP using PHP-FPM (the one that is used when I make HTTP requests on the app)
So, when I run bin/console cache:clear from the first container, the one I use for development, it logs into that container's /tmp/dev-logs/dev.log file; but when I perform HTTP requests, the logs go to the /tmp/dev-logs/dev.log file of the second container.
I was checking the file in the first container while the requests were being logged into the second container's file instead. So, I was simply not checking the right file.
Everything works. :)
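For anyone with a similar two-container setup, tailing the log inside the container that actually serves the requests avoids the confusion (the container name below is hypothetical):
$ docker exec -it my-php-fpm-container tail -f /tmp/dev-logs/dev.log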
ERROR MESSAGE:
W: Missing encryption key to decrypt file with. Ask your team for your master key and write it to /app/config/master.key or put it in the ENV['RAILS_MASTER_KEY'].
When deploying my project on Platform.sh, the operation failed because of the missing decryption key. From my Google search, I found that the decryption key can be provided through the RAILS_MASTER_KEY environment variable.
My Ubuntu .bashrc:
export RAILS_MASTER_KEY='ad5e30979672cdcc2dd4f4381704292a'
Rails project configuration for Platform.sh:
.platform.app.yaml
# The name of this app. Must be unique within a project.
name: app
type: 'ruby:2.7'

# The size of the persistent disk of the application (in MB).
disk: 5120

mounts:
    'web/uploads':
        source: local
        source_path: uploads

relationships:
    postgresdatabase: 'dbpostgres:postgresql'

hooks:
    build: |
        gem install bundler:2.2.5
        bundle install
        RAILS_ENV=production bundle exec rake assets:precompile
    deploy: |
        RACK_ENV=production bundle exec rake db:migrate

web:
    upstream:
        socket_family: "unix"
    commands:
        start: "\"unicorn -l $SOCKET -E production config.ru\""
    locations:
        '/':
            root: "\"public\""
            passthru: true
            expires: "24h"
            allow: true
routes.yaml
# Each route describes how an incoming URL is going to be processed by Platform.sh.
"https://www.{default}/":
    type: upstream
    upstream: "app:http"
"https://{default}/":
    type: redirect
    to: "https://www.{default}/"
services.yaml
# The name given to the PostgreSQL service (lowercase alphanumeric only).
dbpostgres:
    type: postgresql:13
    # The disk attribute is the size of the persistent disk (in MB) allocated to the service.
    disk: 5120
db:
    type: postgresql:13
    disk: 5120
    configuration:
        extensions:
            - pgcrypto
            - plpgsql
            - uuid-ossp
environments/production.rb
config.require_master_key = true
I suspect that the master.key is not accessible during deployment, and I don't understand how to solve the problem.
From what I understand, your export lives in the .bashrc on your local machine, so it won't be accessible when deploying on Platform.sh. (The build and deploy logs you see in your terminal are streamed from Platform.sh; none of it runs on your machine.)
You need to make the RAILS_MASTER_KEY accessible on Platform.sh. To do so, this variable needs to be declared in your project.
Given the nature of the variable, I would suggest using the Platform CLI to create it.
If this variable should be accessible on all your environments, you can make it a project level variable.
$ platform variable:create --level project --sensitive true env:RAILS_MASTER_KEY <your_key>
If it should only be accessible for a specific environment, then you need an environment level variable:
$ platform variable:create --level environment --environment '<your_environment>' --inheritable false --sensitive true env:RAILS_MASTER_KEY '<your_key>'
The env: prefix in the variable name tells Platform.sh to expose the variable with the rest of the environment variables. More information about this can be found in the variables prefix section of the environment variables documentation page.
You could do the same via the management console if you prefer to avoid the command line.
Environment variables can also be configured directly in your .platform.app.yaml file, as described here, and as sketched below. Keep in mind that since this file is versioned, you should not use this method for sensitive information such as encryption keys, API keys, and other kinds of secrets.
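For illustration, a non-sensitive variable declared directly in .platform.app.yaml would look like this minimal sketch (the variable is made up; do not put RAILS_MASTER_KEY here, for the reason above):
variables:
    env:
        RAILS_LOG_TO_STDOUT: "true"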
The RAILS_MASTER_KEY environment variable should now be accessible during your Platform.sh deployment.
Cloud Composer is failing a task because it is unable to read a log file; it complains about incorrect encoding.
Here's the log that appears in the UI:
*** Unable to read remote log from gs://bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** 'ascii' codec can't decode byte 0xc2 in position 6986: ordinal not in range(128)
*** Log file does not exist: /home/airflow/gcs/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Fetching from: http://airflow-worker-68dc66c9db-x945n:8793/log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log
*** Failed to fetch log file from worker. HTTPConnectionPool(host='airflow-worker-68dc66c9db-x945n', port=8793): Max retries exceeded with url: /log/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f1c9ff19d10>: Failed to establish a new connection: [Errno -2] Name or service not known',))
I try viewing the file in the Google Cloud console and it also throws an error:
Failed to load
Tracking Number: 8075820889980640204
But I am able to download the file via gsutil.
When I view the file, it seems to have text overriding other text.
I can't show the entire file but it looks like this:
--------------------------------------------------------------------------------
Starting attempt 1 of 1
--------------------------------------------------------------------------------
#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,313] {models.py:1569} INFO - Executing <Task(BigQueryOperator): merge_campaign_exceptions> on 2019-08-03T10:00:00+00:00#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:23,314] {base_task_runner.py:124} INFO - Running: ['bash', '-c', u'airflow run __campaign_exceptions_0_0_1 merge_campaign_exceptions 2019-08-03T10:00:00+00:00 --job_id 22767 --pool _bq_pool --raw -sd DAGS_FOLDER//-campaign-exceptions.py --cfg_path /tmp/tmpyBIVgT']#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
[2019-08-04 10:01:24,658] {base_task_runner.py:107} INFO - Job 22767: Subtask merge_campaign_exceptions [2019-08-04 10:01:24,658] {settings.py:176} INFO - setting.configure_orm(): Using pool settings. pool_size=5, pool_recycle=1800#-#{"task-id": "merge_campaign_exceptions", "execution-date": "2019-08-03T10:00:00+00:00", "workflow": "__campaign_exceptions_0_0_1"}
Here, the #-#{...} pieces seem to be written "on top of" the typical log lines.
I faced the same problem. In my case, the problem was that I had removed the google_cloud_default connection that was being used to retrieve the logs.
Check the configuration and look for the connection name:
[core]
remote_log_conn_id = google_cloud_default
Then check that the credentials used for that connection have the right permissions to access the GCS bucket.
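As a quick sanity check, you can list the connections Airflow knows about; the exact CLI form depends on your Airflow version (shown here as a sketch):
# Airflow 1.10.x
airflow connections --list
# Airflow 2.x
airflow connections list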
I'm having a similar problem with viewing logs in GCP Cloud Composer. It doesn't appear to prevent the failing DAG task from running, though. It looks like a permissions error between GKE and the storage bucket where the log files are kept.
You can still view the logs by going into your cluster's storage bucket, in the same location as your /dags folder, where you should also see a logs/ folder.
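Since gsutil works even when the UI fails, reading the log straight from the bucket is a usable workaround (the bucket name below is a placeholder; the paths reuse the DAG and task from the question):
gsutil ls gs://your-composer-bucket/logs/
gsutil cat gs://your-composer-bucket/logs/campaign_exceptions_0_0_1/merge_campaign_exceptions/2019-08-03T10:00:00+00:00/1.log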
Your Helm chart should set up a global env entry:
- name: AIRFLOW_CONN_GOOGLE_CLOUD_DEFAULT
  value: "google-cloud-platform://"
Then, you should deploy with a Dockerfile that uses the root account only (not the airflow account); additionally, set up your Helm uid and gid as:
uid: 50000  # airflow user
gid: 50000  # airflow group
Then upgrade the Helm chart with the new config.
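Roughly like this (release and chart names are placeholders):
helm upgrade my-airflow-release apache-airflow/airflow -f values.yaml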
*** Unable to read remote log from gs://bucket
1) I found the solution after assigning the proper roles to the service account.
2) The SA key (JSON or text) has to be added and configured on the connection named in:
remote_log_conn_id = google_cloud_default
3) Restart the scheduler and webserver of Airflow.
4) Restart the DAGs in Airflow.
You can then find the logs in the GCS bucket where they are configured.
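For reference, the relevant remote-logging settings in airflow.cfg look like this sketch (the bucket path is illustrative; in newer Airflow versions these keys live in the [logging] section rather than [core]):
[core]
remote_logging = True
remote_base_log_folder = gs://your-bucket/logs
remote_log_conn_id = google_cloud_default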
I'm having issues applying state files to minions on Salt; they're just basic test ones, nothing complicated.
In my master config file I have the following file_roots definition:
file_roots:
  base:
    - /srv/salt/
My /srv/salt/top.sls file looks like this:
base:
  '*':
    - vim
Then at /srv/salt/vim/init.sls I have the following:
vim:
  pkg.installed
So that state should be applied to all minions; to apply it, I run the following:
sudo salt '*' state.apply
I get the following output, and the state is not applied; it seems the top.sls file is not being detected:
salt-master-1:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found.
     Changes:

Summary for salt-master-1
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
dev-docker-1:
----------
          ID: states
    Function: no.None
      Result: False
     Comment: No Top file or master_tops data matches found.
     Changes:

Summary for dev-docker-1
------------
Succeeded: 0
Failed:    1
------------
Total states run:     1
Total run time:   0.000 ms
ERROR: Minions returned with non-zero exit code
If I look at the logs for the minion dev-docker-1, nothing is logged as an error; all that I see is this:
2018-11-08 18:33:12,993 [salt.minion :1429][INFO ][4883] User sudo_salt Executing command state.apply with jid 20181108183312990343
2018-11-08 18:33:13,015 [salt.minion :1564][INFO ][5438] Starting a new job with PID 5438
2018-11-08 18:33:13,331 [salt.state :933 ][INFO ][5438] Loading fresh modules for state activity
2018-11-08 18:33:13,448 [salt.minion :1863][INFO ][5438] Returning information for job: 20181108183312990343
Any help greatly appreciated, as I'm a bit lost as to why this isn't working...
Edit 1
I have enabled verbose logging on the minion, and I see the following; it seems it can't find the top.sls file:
[DEBUG ] Could not find file 'salt://top.sls' in saltenv 'base'
[DEBUG ] No contents loaded for saltenv 'base'
[DEBUG ] No contents found in top file. If this is not expected, verify that the 'file_roots' specified in 'etc/master' are accessible. The 'file_roots' configuration is: {u'base': []}
OK, so I worked this out: operator error.
I had enabled the gitfs backend in the config file, which overrode the default roots fileserver backend, so I just needed to do this:
fileserver_backend:
  - gitfs
  - roots
Doh!
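After fixing the backend list, something along these lines should make top.sls visible again (a sketch; the service manager and names may differ on your system):
sudo systemctl restart salt-master
salt-run fileserver.update
sudo salt '*' state.apply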
I have successfully been using SaltStack to manage virtual and bare-metal Ubuntu 14.04 servers for about a year.
On master, I have the following /srv/salt/top.sls:
base:
  '*':
    - common
    - users
    - openvpn          # openvpn-formula
    - openvpn.config   # openvpn-formula
    - fail2ban         # fail2ban-formula
    - fail2ban.config  # fail2ban-formula
    - swapfile         # swapfile-formula
    - ntp              # ntp-formula to set up and configure the ntp client or server
In /etc/salt/master I have included the following:
gitfs_remotes:
  - https://github.com/srbolle/openvpn-formula.git
  - https://github.com/srbolle/postgres-formula.git
  - https://github.com/srbolle/fail2ban-formula.git
  - https://github.com/srbolle/ntp-formula.git
  - https://github.com/srbolle/swapfile-formula.git
I have had no problems with saltstack-formulas until now, but since I recently included the swapfile-formula, I get the following when running salt '*' state.highstate:
servername:
    Data failed to compile:
----------
    No matching sls found for 'swapfile' in env 'base'
Also, I get the same error message when running:
salt 'servername' state.show_sls swapfile
servername:
- No matching sls found for 'swapfile' in env 'base'
When I run:
salt servername state.show_top
'swapfile' is listed. I have tried clearing the cache, restarting servers, recreating servers, and using 'kwalify -m top.sls' to validate my top.sls file. I have spent days on this error and don't know how to debug it further (the logs don't show anything suspicious).
Thankful for any clues on how to proceed.