I would like to configure my GitLab project so that every maintainer can merge (after review), but nobody can push to master; only a bot may push (for releases).
I'm using Terraform to configure my GitLab project, with something like this:
resource "gitlab_branch_protection" "BranchProtect" {
project = local.project_id
branch = "master"
push_access_level = "no one"
merge_access_level = "maintainer"
}
But we have a "premium" version, and the Terraform provider does not allow adding a specific user (see: https://github.com/gitlabhq/terraform-provider-gitlab/issues/165 ).
So what I would like to do is make some HTTP requests against the API to add the specific user.
So I'm doing it like this:
1. get the current protection
2. delete the current configuration
3. update the retrieved configuration with what I want
4. push the new configuration
BTW: I haven't found a way to just update the configuration... https://docs.gitlab.com/ee/api/protected_branches.html
# get the current protection and append the wanted user to push_access_levels
TMP_FILE=$(mktemp)
http GET \
  $GITLAB_URL/api/v4/projects/$pid/protected_branches \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  name=$BRANCH_NAME \
  | \
  jq \
  --arg uid $USER_ID \
  '.[0] | .push_access_levels |= . + [{user_id: ($uid | tonumber)}]' \
  > $TMP_FILE
# delete the existing protection
http DELETE \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN
# re-create the protection from the modified JSON
http --verbose POST \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches" \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  < $TMP_FILE
But my problem is that the resulting configuration is not what I expect; I get something like this:
"push_access_levels": [
{
"access_level": 40,
"access_level_description": "Maintainers",
"group_id": null,
"user_id": null
}
],
How can I just update the branch protection to add a single user?
OK, like they say: RTFM!
The POST endpoint accepts an allowed_to_push array of users, but you need to delete the existing rule before adding the new configuration.
http \
  DELETE \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN

http \
  POST \
  $GITLAB_URL/api/v4/projects/$pid/protected_branches \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  name==${BRANCH_NAME} \
  push_access_level==0 \
  merge_access_level==40 \
  unprotect_access_level==40 \
  allowed_to_push[][user_id]==$USER_ID
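To double-check the result, you can fetch the protection again and inspect push_access_levels; a small sketch reusing the variables above (the jq filter is just illustrative):

http GET \
  "$GITLAB_URL/api/v4/projects/$pid/protected_branches/$BRANCH_NAME" \
  PRIVATE-TOKEN:$GITLAB_TOKEN \
  | jq '.push_access_levels'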
I'm trying to initialize a DynamoDB table when creating a LocalStack container.
Consider the following command:
awslocal dynamodb create-table \
--debug \
--table-name Journal \
--global-secondary-indexes 'IndexName=GetJournalRowsIndex, KeySchema=[{AttributeName=persistence-id, KeyType=HASH},{AttributeName=sequence-nr,KeyType=RANGE}], Projection={ProjectionType=ALL}, ProvisionedThroughput={ReadCapacityUnits=10,WriteCapacityUnits=10}' \
--global-secondary-indexes 'IndexName=TagsIndex, KeySchema=[{AttributeName=tags,KeyType=HASH}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=10,WriteCapacityUnits=10}' \
--key-schema \
AttributeName=pkey,KeyType=HASH \
AttributeName=skey,KeyType=RANGE \
--attribute-definitions \
AttributeName=persistence-id,AttributeType=S \
AttributeName=pkey,AttributeType=S \
AttributeName=skey,AttributeType=S \
AttributeName=sequence-nr,AttributeType=N \
AttributeName=tags,AttributeType=S \
--billing-mode PAY_PER_REQUEST
I'm getting the following error:
An error occurred (ValidationException) when calling the CreateTable operation: The number of attributes in key schema must match the number of attributes defined in attribute definitions.
I'm using those attributes in the GSIs, so I wonder what I'm doing wrong here?
I guess you can't specify the --global-secondary-indexes flag twice. Try the following:
awslocal dynamodb create-table \
--debug \
--table-name Journal \
--global-secondary-indexes "[{\"IndexName\": \"GetJournalRowsIndex\", \"KeySchema\": [{\"AttributeName\": \"persistence-id\", \"KeyType\": \"HASH\"}, {\"AttributeName\": \"sequence-nr\", \"KeyType\": \"RANGE\"}], \"Projection\": {\"ProjectionType\": \"ALL\"}, \"ProvisionedThroughput\": {\"ReadCapacityUnits\": 1, \"WriteCapacityUnits\": 1}}, {\"IndexName\": \"TagsIndex\", \"KeySchema\": [{\"AttributeName\": \"tags\", \"KeyType\": \"HASH\"}], \"Projection\": {\"ProjectionType\": \"ALL\"}, \"ProvisionedThroughput\": {\"ReadCapacityUnits\": 1, \"WriteCapacityUnits\": 1}}]" \
--key-schema \
AttributeName=pkey,KeyType=HASH \
AttributeName=skey,KeyType=RANGE \
--attribute-definitions \
AttributeName=persistence-id,AttributeType=S \
AttributeName=pkey,AttributeType=S \
AttributeName=skey,AttributeType=S \
AttributeName=sequence-nr,AttributeType=N \
AttributeName=tags,AttributeType=S \
--billing-mode PAY_PER_REQUEST
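If the creation goes through, one quick way to sanity-check the resulting schema is to describe the table; a small sketch, assuming awslocal and jq are available and the table is named Journal as above:

# Show the attribute definitions, key schema and GSI names of the new table.
awslocal dynamodb describe-table --table-name Journal \
  | jq '.Table | {AttributeDefinitions, KeySchema, Indexes: [.GlobalSecondaryIndexes[].IndexName]}'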
I'm using Google Cloud DLP to inspect sensitive data in BigQuery. I wonder whether it is possible to inspect all tables within a dataset with one dlpJob. If so, how should I set the configs?
I tried to omit the BigQuery tableId field in the config, but it returns an HTTP 400 error: "table_id must be set". Does that mean that one dlpJob can only inspect one table, and that scanning multiple tables needs multiple dlpJobs? Or is there a way to scan multiple tables within the same dataset with some regex tricks?
At the moment, one job scans just one table. The team is working on that feature; in the meantime you can manually create jobs with a rough shell script like the one below, which combines gcloud and REST calls to the DLP API. You could probably do something a lot smoother with Cloud Functions.
Prerequisites:
1. Install gcloud: https://cloud.google.com/sdk/install
2. Run this script with the following arguments:
   1. The project id whose BigQuery tables should be scanned.
   2. The dataset id of the output table to store findings to.
   3. The table id of the output table to store findings to.
   4. A number that represents the percentage of rows to scan.
#!/bin/bash
#
# Example:
# ./inspect_all_bq_tables.sh dlapi-test findings_dataset findings_table 10

# report
#
# Reports a status of execution message to the log file and serial port
function report() {
local tag="${1}"
local message="${2}"
local timestamp="$(date +%s)000"
echo "${timestamp} - ${message}"
}
readonly -f report
# report_status_update
#
# Reports a status of execution message to the log file and serial port
function report_status_update() {
report "${MSGTAG_STATUS_UPDATE}" "STATUS=${1}"
}
readonly -f report_status_update
# create_job
#
# Creates a single dlp job for a given bigquery table.
function create_dlp_job {
local dataset_id="$1"
local table_id="$2"
local create_job_response=$(curl -s -H \
"Authorization: Bearer $(gcloud auth print-access-token)" \
-H "X-Goog-User-Project: $PROJECT_ID" \
-H "Content-Type: application/json" \
"$API_PATH/v2/projects/$PROJECT_ID/dlpJobs" \
--data '
{
"inspectJob":{
"storageConfig":{
"bigQueryOptions":{
"tableReference":{
"projectId":"'$PROJECT_ID'",
"datasetId":"'$dataset_id'",
"tableId":"'$table_id'"
},
"rowsLimitPercent": "'$PERCENTAGE'"
}
},
"inspectConfig":{
"infoTypes":[
{
"name":"ALL_BASIC"
}
],
"includeQuote":true,
"minLikelihood":"LIKELY"
},
"actions":[
{
"saveFindings":{
"outputConfig":{
"table":{
"projectId":"'$PROJECT_ID'",
"datasetId":"'$FINDINGS_DATASET_ID'",
"tableId":"'$FINDINGS_TABLE_ID'"
},
"outputSchema": "BASIC_COLUMNS"
}
}
},
{
"publishFindingsToCloudDataCatalog": {}
}
]
}
}')
if [[ $create_job_response != *"dlpJobs"* ]]; then
report_status_update "Error creating dlp job: $create_job_response"
exit 1
fi
local new_dlpjob_name=$(echo "$create_job_response" \
    | head -5 | grep -Po '"name": *\K"[^"]*"' | tr -d '"' | head -1)
report_status_update "DLP New Job: $new_dlpjob_name"
}
readonly -f create_dlp_job
# List the datasets for a given project. Once we have these we can list the
# tables within each one.
function create_jobs() {
# The grep pulls the dataset id. The tr removes the quotation marks.
local list_datasets_response=$(curl -s -H \
"Authorization: Bearer $(gcloud auth print-access-token)" -H \
"Content-Type: application/json" \
"$BIGQUERY_PATH/projects/$PROJECT_ID/datasets")
if [[ $list_datasets_response != *"kind"* ]]; then
report_status_update "Error listing bigquery datasets: $list_datasets_response"
exit 1
fi
local dataset_ids=$(echo $list_datasets_response \
| grep -Po '"datasetId": *\K"[^"]*"' | tr -d '"')
# Each row is now a bare dataset id (the quotation marks were stripped above)
for dataset_id in ${dataset_ids}; do
report_status_update "Looking up tables for dataset $dataset_id"
local list_tables_response=$(curl -s -H \
"Authorization: Bearer $(gcloud auth print-access-token)" -H \
"Content-Type: application/json" \
"$BIGQUERY_PATH/projects/$PROJECT_ID/datasets/$dataset_id/tables")
if [[ $list_tables_response != *"kind"* ]]; then
report_status_update "Error listing bigquery tables: $list_tables_response"
exit 1
fi
local table_ids=$(echo "$list_tables_response" \
| grep -Po '"tableId": *\K"[^"]*"' | tr -d '"')
for table_id in ${table_ids}; do
report_status_update "Creating DLP job to inspect table $table_id"
create_dlp_job "$dataset_id" "$table_id"
done
done
}
readonly -f create_jobs
PROJECT_ID=$1
FINDINGS_DATASET_ID=$2
FINDINGS_TABLE_ID=$3
PERCENTAGE=$4
API_PATH="https://dlp.googleapis.com"
BIGQUERY_PATH="https://www.googleapis.com/bigquery/v2"
# Main
create_jobs
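Once the script has run, you can check that the inspect jobs were actually created by listing them; a small sketch reusing the script's variables (dlpJobs list is the standard DLP v2 endpoint):

curl -s -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "X-Goog-User-Project: $PROJECT_ID" \
  "$API_PATH/v2/projects/$PROJECT_ID/dlpJobs?type=INSPECT_JOB" \
  | grep -Po '"name": *"[^"]*"'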
I want to create a cluster in Dataproc with both Jupyter and Datalab installed (I understand they are very similar, but team members have different preferences). I can create a cluster with either of them.
Cluster with Jupyter:
gcloud dataproc clusters create $DATAPROC_CLUSTER_NAME_JUPYTER \
--project $PROJECT \
--bucket $BUCKET \
--zone $ZONE \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
--metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
--metadata JUPYTER_PORT=$JUPYTER_PORT,JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn
Cluster with DataLab:
gcloud dataproc clusters create $DATAPROC_CLUSTER_NAME_DATALAB \
--project $PROJECT \
--bucket $BUCKET \
--zone $ZONE \
--master-boot-disk-size $MASTER_DISK_SIZE \
--worker-boot-disk-size $WORKER_DISK_SIZE \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh \
--metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
--metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
--scopes cloud-platform,bigquery
Both work well. However, when I try to create a cluster with both of them, it fails:
gcloud dataproc clusters create test \
--project $PROJECT \
--bucket $BUCKET \
--zone $ZONE \
--initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh,gs://dataproc-initialization-actions/jupyter/jupyter.sh \
--metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
--metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
--metadata JUPYTER_PORT=$JUPYTER_PORT,JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn \
--scopes cloud-platform,bigquery
The error messages are:
ERROR: (gcloud.dataproc.clusters.create) Operation [projects/abc/regions/global/operations/d34943dc-5bda-386f-af91-db6e0516e2c5] failed: Multiple Errors:
- Initialization action failed. Failed action 'gs://dataproc-initialization-actions/jupyter/jupyter.sh', see output in: gs://abc/google-cloud-dataproc-metainfo/266175ef-e595-4732-b351-335837a3f30e/test-m/dataproc-initialization-script-2_output
- Initialization action failed. Failed action 'gs://dataproc-initialization-actions/jupyter/jupyter.sh', see output in: gs://abc/google-cloud-dataproc-metainfo/266175ef-e595-4732-b351-335837a3f30e/test-w-0/dataproc-initialization-script-2_output
- Initialization action failed. Failed action 'gs://dataproc-initialization-actions/jupyter/jupyter.sh', see output in: gs://abc/google-cloud-dataproc-metainfo/266175ef-e595-4732-b351-335837a3f30e/test-w-1/dataproc-initialization-script-2_output.
The output file for test-m looks like the following:
++ /usr/share/google/get_metadata_value attributes/dataproc-role
+ readonly ROLE=Worker
+ ROLE=Worker
++ /usr/share/google/get_metadata_value attributes/INIT_ACTIONS_REPO
++ echo https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git
+ readonly INIT_ACTIONS_REPO=https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git
+ INIT_ACTIONS_REPO=https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git
++ /usr/share/google/get_metadata_value attributes/INIT_ACTIONS_BRANCH
++ echo master
+ readonly INIT_ACTIONS_BRANCH=master
+ INIT_ACTIONS_BRANCH=master
++ /usr/share/google/get_metadata_value attributes/JUPYTER_CONDA_CHANNELS
+ readonly JUPYTER_CONDA_CHANNELS=
+ JUPYTER_CONDA_CHANNELS=
++ /usr/share/google/get_metadata_value attributes/JUPYTER_CONDA_PACKAGES
+ readonly JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn
+ JUPYTER_CONDA_PACKAGES=numpy:scipy:pandas:scikit-learn
+ echo 'Cloning fresh dataproc-initialization-actions from repo https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git and branch master...'
Cloning fresh dataproc-initialization-actions from repo https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git and branch master...
+ git clone -b master --single-branch https://github.com/GoogleCloudPlatform/dataproc-initialization-actions.git
fatal: destination path 'dataproc-initialization-actions' already exists and is not an empty directory.
It looks like there is a clone step which prevents the installation from succeeding. How can I solve this? Any suggestion is appreciated, thank you.
This appears to be a bug in the init actions: the repository cannot be git cloned twice. We will fix this.
In the meantime, you can try the Jupyter optional component instead, combined with the Datalab init action.
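For example, a rough sketch of that workaround, reusing the variables from the question (the image version is an assumption, since the Jupyter optional component needs a sufficiently recent Dataproc image, and the Jupyter init-action metadata is dropped because the component does not use it):

gcloud dataproc clusters create test \
  --project $PROJECT \
  --bucket $BUCKET \
  --zone $ZONE \
  --image-version 1.3 \
  --optional-components=ANACONDA,JUPYTER \
  --initialization-actions gs://dataproc-initialization-actions/connectors/connectors.sh,gs://dataproc-initialization-actions/datalab/datalab.sh \
  --metadata gcs-connector-version=$GCS_CONNECTOR_VERSION \
  --metadata bigquery-connector-version=$BQ_CONNECTOR_VERSION \
  --scopes cloud-platform,bigquery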
I've successfully built a Yocto image for the RPi2 following this tutorial. I decided to expand the QML demo and try some of the Qt Quick styles (import QtQuick.Controls.Styles 1.4).
Here is the .bb file for the image:
# Pulled from a mix of different images:
include recipes-core/images/rpi-basic-image.bb
# This image is a little more full featured, and includes wifi
# support, provided you have a raspberrypi3
inherit linux-raspberrypi-base
SUMMARY = "The minimal image that can run Qt5 applications"
LICENSE = "MIT"
# depend on bcm2835, which will bring in rpi-config
DEPENDS += "bcm2835-bootfiles"
MY_TOOLS = " \
qtbase \
qtbase-dev \
qtbase-mkspecs \
qtbase-plugins \
qtbase-tools \
"
MY_PKGS = " \
qt3d \
qt3d-dev \
qt3d-mkspecs \
qtcharts \
qtcharts-dev \
qtcharts-mkspecs \
qtconnectivity-dev \
qtconnectivity-mkspecs \
qtquickcontrols2 \
qtquickcontrols2-dev \
qtquickcontrols2-mkspecs \
qtdeclarative \
qtdeclarative-dev \
qtdeclarative-mkspecs \
qtgraphicaleffects \
qtgraphicaleffects-dev \
"
MY_FEATURES = " \
linux-firmware-bcm43430 \
bluez5 \
i2c-tools \
python-smbus \
bridge-utils \
hostapd \
dhcp-server \
iptables \
wpa-supplicant \
"
DISTRO_FEATURES_append += " bluez5 bluetooth wifi"
IMAGE_INSTALL_append = " \
${MY_TOOLS} \
${MY_PKGS} \
${MY_FEATURES} \
basicquick \
"
# Qt >5.7 doesn't ship with fonts, so these need to be added explicitly
IMAGE_INSTALL_append = "\
ttf-dejavu-sans \
ttf-dejavu-sans-mono \
ttf-dejavu-sans-condensed \
ttf-dejavu-serif \
ttf-dejavu-serif-condensed \
ttf-dejavu-common \
"
and the .bb file for the demo itself:
SUMMARY = "Simple Qt5 Quick application"
SECTION = "examples"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
# I want to make sure these get installed too.
DEPENDS += "qtbase qtdeclarative qtquickcontrols2"
SRCREV = "${AUTOREV}"
# GitLab own repo
SRC_URI[sha256sum] = "f2dcc13cda523e9aa9e8e1db6752b3dfaf4b531bfc9bb8e272eb3bfc5974738a"
SRC_URI = "git://git#gitlab.com:/some-repo.git;protocol=ssh"
S = "${WORKDIR}/git"
require recipes-qt/qt5/qt5.inc
do_install() {
install -d ${D}${bindir}
install -m 0755 BasicQuick ${D}${bindir}
}
Upon execution I got the error:
QQmlApplicationEngine failed to load component
qrc:/main.qml:24 Type Page2 unavailable
qrc:/Page2.qml:4 module "QtQuick.Controls.Styles" is not installed
with Page2 being an item I have defined and used inside main.qml. The demo runs without any issues on my PC (custom-built Qt 5.9.1) but fails on the RPi2 due to the missing submodule.
Frankly, I've never used this submodule before (my custom-built Qt 5.9.1 has everything enabled) and I'm not sure what I need to include (if meta-qt5 even provides it) in order to be able to use it on the Yocto system.
The problem is a version mismatch in the Qt Quick Controls package.
You use version 1:
import QtQuick.Controls.Styles 1.4
but build version 2:
MY_PKGS = " \
...
qtquickcontrols2 \
...
What you need to include in your image is qtquickcontrols (the version 1 package).
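For example, a minimal sketch of the change in the image recipe above (the package names are the usual meta-qt5 ones; adjust to what your layer actually provides):

MY_PKGS += " \
    qtquickcontrols \
    qtquickcontrols-qmlplugins \
"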
You need to install qtquickcontrols-qmlplugins.
Just add this to build/local.conf:
PACKAGECONFIG_append_pn-qtbase = " accessibility"
PACKAGECONFIG_append_pn-qtquickcontrols = " accessibility"
IMAGE_INSTALL_append = " qtdeclarative-qmlplugins qtquickcontrols-qmlplugins"
Here is the original write-up:
https://importgeek.wordpress.com/2018/07/17/module-qtquick-controls-is-not-installed/
I'm using Ag, vimproc, and quickrun.
I'm able to run .php files asynchronously with these settings in my .vimrc:
nnoremap <leader>r :QuickRun<CR>
let g:quickrun_config = get(g:, 'quickrun_config', {})
let g:quickrun_config = {
\ "_" : {
\ 'runner': 'vimproc',
\ 'runner/vimproc/updatetime': 60,
\ 'outputter/quickfix': ':quickrun-module-outputter/quickfix',
\ 'outputter/buffer/split': ':rightbelow 8sp',
\ },
\}
Does anyone know how I can run ag asynchronously?
Ag settings in my .vimrc:
if executable("ag")
let g:ctrlp_user_command = 'ag %s -i --nocolor --nogroup --hidden
\ --ignore .git
\ --ignore .svn
\ --ignore .hg
\ --ignore .DS_Store
\ --ignore "**/*.pyc"
\ -g ""'
endif
nnoremap <leader>a :Ag!<Space>
nnoremap <leader>aa :Ag! <C-r>=expand('<cword>')<CR>
nnoremap <leader>aaa :Ag! <C-r>=expand('<cword>')<CR><CR>
A generic solution for running asynchronous commands in Vim is the plugin vim-dispatch; you can launch a command in the background with :Start!.
Alternatively, there is also a fork of Vim called Neovim, which is trying to address this issue. At the time of this post it is not necessarily mature enough, but it is something to consider for the future.
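For example, a rough sketch of wiring Ag through vim-dispatch (the mapping key and the ag flags are only illustrative, not from the original config):

" :Start! just launches the command in the background (its output is not captured);
" :Dispatch runs it asynchronously and tries to load the results into the quickfix list.
nnoremap <leader>ag :Dispatch ag --vimgrep <C-r>=expand('<cword>')<CR><CR>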