Add a command to the Atom keymap - atom-editor

I would like to add a command to the Atom keymap:
ALT+j: select all occurrences
But I don't know how it works; I've only added cmd+d, which was easy.
This is what I have right now:
'body':
  'cmd-,': 'application:show-settings'
  'cmd-N': 'application:new-window'
  'cmd-W': 'window:close'
  'cmd-o': 'application:open'
  'cmd-T': 'pane:reopen-closed-item'
  'cmd-n': 'application:new-file'
  'cmd-s': 'core:save'
  'cmd-S': 'core:save-as'
  'cmd-alt-s': 'window:save-all'
  'cmd-w': 'core:close'
  'cmd-ctrl-f': 'window:toggle-full-screen'
  'cmd-z': 'core:undo'
  'cmd-y': 'core:redo'
  'cmd-x': 'core:cut'
  'cmd-c': 'core:copy'
  'cmd-v': 'core:paste'
  'shift-up': 'core:select-up'
  'shift-down': 'core:select-down'
  'shift-left': 'core:select-left'
  'shift-right': 'core:select-right'
  'shift-pageup': 'core:select-page-up'
  'shift-pagedown': 'core:select-page-down'
  'delete': 'core:delete'
  'shift-delete': 'core:delete'
  'pageup': 'core:page-up'
  'pagedown': 'core:page-down'
  'backspace': 'core:backspace'
  'shift-backspace': 'core:backspace'
  'cmd-up': 'core:move-to-top'
  'cmd-down': 'core:move-to-bottom'
  'cmd-shift-up': 'core:select-to-top'
  'cmd-shift-down': 'core:select-to-bottom'
  'cmd-{': 'pane:show-previous-item'
  'cmd-}': 'pane:show-next-item'
  'cmd-alt-left': 'pane:show-previous-item'

'atom-workspace atom-text-editor:not([mini])':
  'cmd-d': 'editor:duplicate-lines'

This is what I found in my keymap.cson file, reached from the Atom -> Settings -> Keybindings tab/page:
# Your keymap
#
# Atom keymaps work similarly to style sheets. Just as style sheets use
# selectors to apply styles to elements, Atom keymaps use selectors to associate
# keystrokes with events in specific contexts. Unlike style sheets however,
# each selector can only be declared once.
#
# You can create a new keybinding in this file by typing "key" and then hitting
# tab.
#
# Here's an example taken from Atom's built-in keymap:
#
# 'atom-text-editor':
#   'enter': 'editor:newline'
#
# 'atom-workspace':
#   'ctrl-shift-p': 'core:move-up'
#   'ctrl-p': 'core:move-down'
#
# You can find more information about keymaps in these guides:
# * http://flight-manual.atom.io/using-atom/sections/basic-customization/#_customizing_keybindings
# * http://flight-manual.atom.io/behind-atom/sections/keymaps-in-depth/
#
# If you're having trouble with your keybindings not working, try the
# Keybinding Resolver: `Cmd+.` on macOS and `Ctrl+.` on other platforms. See the
# Debugging Guide for more information:
# * http://flight-manual.atom.io/hacking-atom/sections/debugging/#check-the-keybindings
#
# This file uses CoffeeScript Object Notation (CSON).
# If you are unfamiliar with CSON, you can read more about it in the
# Atom Flight Manual:
# http://flight-manual.atom.io/using-atom/sections/basic-customization/#_cson
The links in that file are very helpful. Check these out:
http://flight-manual.atom.io/using-atom/sections/basic-customization/#_customizing_keybindings
http://flight-manual.atom.io/behind-atom/sections/keymaps-in-depth/
For CSON details, go further into the basic-customization part of the Atom Flight Manual.
Hope this gives you some direction on how to do it!
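
For the binding you asked about: Atom's "select all occurrences" command is find-and-replace:select-all (bound to cmd-ctrl-g by default on macOS; cmd-d is find-and-replace:select-next by default, though you have rebound it). Assuming you want it active in any text editor, adding something like this to your keymap.cson should work:

'atom-text-editor':
  'alt-j': 'find-and-replace:select-all'

If it does not trigger, the Keybinding Resolver (Cmd+. / Ctrl+.) mentioned in the comments will show which binding wins.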

Dagster: Multiple and Conditional Outputs (Type check failed for step output xxx PySparkDataFrame)

I'm executing the Dagster tutorial, and I got stuck at the Multiple and Conditional Outputs step.
In the solid definitions, it asks you to declare (among other things):
output_defs=[
    OutputDefinition(
        name="hot_cereals", dagster_type=DataFrame, is_required=False
    ),
    OutputDefinition(
        name="cold_cereals", dagster_type=DataFrame, is_required=False
    ),
],
But there's no information about where DataFrame comes from.
First I tried pandas.DataFrame, but I faced the error: {dagster_type} is not a valid dagster type. It happens when I try to submit it via $ dagit -f multiple_outputs.py.
Then I installed dagster_pyspark and tried dagster_pyspark.DataFrame. This time I managed to submit the DAG to the UI. However, when I run it from the UI, I get the following error:
dagster.core.errors.DagsterTypeCheckDidNotPass: Type check failed for step output hot_cereals of type PySparkDataFrame.
File "/Users/bambrozio/.local/share/virtualenvs/dagster-tutorial/lib/python3.7/site-packages/dagster/core/execution/plan/execute_plan.py", line 210, in _dagster_event_sequence_for_step
for step_event in check.generator(step_events):
File "/Users/bambrozio/.local/share/virtualenvs/dagster-tutorial/lib/python3.7/site-packages/dagster/core/execution/plan/execute_step.py", line 273, in core_dagster_event_sequence_for_step
for evt in _create_step_events_for_output(step_context, user_event):
File "/Users/bambrozio/.local/share/virtualenvs/dagster-tutorial/lib/python3.7/site-packages/dagster/core/execution/plan/execute_step.py", line 298, in _create_step_events_for_output
for output_event in _type_checked_step_output_event_sequence(step_context, output):
File "/Users/bambrozio/.local/share/virtualenvs/dagster-tutorial/lib/python3.7/site-packages/dagster/core/execution/plan/execute_step.py", line 221, in _type_checked_step_output_event_sequence
dagster_type=step_output.dagster_type,
Does anyone know how to fix it? Thanks for the help!
As Arthur pointed out, the full tutorial code is available on Dagster's github.
However, you do not need dagster_pandas; the key lines missing from your code are:

import typing

from dagster import PythonObjectDagsterType

if typing.TYPE_CHECKING:
    DataFrame = list
else:
    DataFrame = PythonObjectDagsterType(list, name="DataFrame")  # type: typing.Any
The reason for the above structure is to achieve MyPy compliance; see the Types & Expectations section of the tutorial.
See also the documentation on Dagster types.
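
As a minimal, self-contained illustration of what that declaration does (the solid name load_cereals here is hypothetical), attaching the resulting type to an output makes Dagster check at execution time that the value is a list:

from dagster import OutputDefinition, PythonObjectDagsterType, solid

# A Dagster type whose type check is essentially isinstance(value, list)
DataFrame = PythonObjectDagsterType(list, name="DataFrame")

@solid(output_defs=[OutputDefinition(name="cereals", dagster_type=DataFrame)])
def load_cereals(context):
    # Returning anything other than a list here would raise
    # DagsterTypeCheckDidNotPass, like the error in the question.
    return [{"name": "Cheerios", "calories": 110}]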
I was stuck here, too, but luckily I found the updated source code.
They have updated the docs so that the OutputDefinition is defined beforehand.
Update your code before the sorting solids and the pipeline definition as below:
import csv
import os

from dagster import (
    Bool,
    Field,
    Output,
    OutputDefinition,
    execute_pipeline,
    pipeline,
    solid,
)


@solid
def read_csv(context, csv_path):
    lines = []
    csv_path = os.path.join(os.path.dirname(__file__), csv_path)
    with open(csv_path, "r") as fd:
        for row in csv.DictReader(fd):
            row["calories"] = int(row["calories"])
            lines.append(row)
    context.log.info("Read {n_lines} lines".format(n_lines=len(lines)))
    return lines


@solid(
    config_schema={
        "process_hot": Field(Bool, is_required=False, default_value=True),
        "process_cold": Field(Bool, is_required=False, default_value=True),
    },
    output_defs=[
        OutputDefinition(name="hot_cereals", is_required=False),
        OutputDefinition(name="cold_cereals", is_required=False),
    ],
)
def split_cereals(context, cereals):
    if context.solid_config["process_hot"]:
        hot_cereals = [cereal for cereal in cereals if cereal["type"] == "H"]
        yield Output(hot_cereals, "hot_cereals")
    if context.solid_config["process_cold"]:
        cold_cereals = [cereal for cereal in cereals if cereal["type"] == "C"]
        yield Output(cold_cereals, "cold_cereals")
You can also find the complete code here.
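
For reference, the updated tutorial then consumes the two conditional outputs in a pipeline along these lines (a sketch; the sorting solids follow the tutorial's pattern but may differ in detail from the current source):

@solid
def sort_hot_cereals_by_calories(context, cereals):
    sorted_cereals = sorted(cereals, key=lambda cereal: cereal["calories"])
    context.log.info("Least caloric hot cereal: " + sorted_cereals[0]["name"])

@solid
def sort_cold_cereals_by_calories(context, cereals):
    sorted_cereals = sorted(cereals, key=lambda cereal: cereal["calories"])
    context.log.info("Least caloric cold cereal: " + sorted_cereals[0]["name"])

@pipeline
def multiple_outputs_pipeline():
    # Each output is only yielded when its config flag is set, so the
    # downstream solid simply does not run when the output is skipped.
    hot_cereals, cold_cereals = split_cereals(read_csv())
    sort_hot_cereals_by_calories(hot_cereals)
    sort_cold_cereals_by_calories(cold_cereals)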
First, try installing the Dagster pandas integration:
pip install dagster_pandas
Then do:
from dagster_pandas import DataFrame
You can find the code from the tutorial here.

Self-referential values in an R config file

Using the config package, I'd like elements to reference other elements,
like how path_file_a references path_directory.
config.yml file in the working directory:
default:
  path_directory : "data-public"
  path_file_a : "{path_directory}/a.csv"
  path_file_b : "{path_directory}/b.csv"
  path_file_c : "{path_directory}/c.csv"
  # recursive : !expr file.path(config::get("path_directory"), "c.csv")
Code:
config <- config::get()
config$path_file_a
# Returns: "{path_directory}/a.csv"
glue::glue(config$path_file_a, .envir = config)
# Returns: "data-public/a.csv"
I can use something like glue::glue() on the value returned by config$path_file_a.
But I'd prefer the value to be substituted already, so that config$path_file_a contains the actual value (not a template for it).
As you might expect, uncommenting the recursive line creates an endless self-referential loop.
Are there better alternatives to glue::glue(config$path_file_a, .envir = config)?
I came across the same problem and I've written a wrapper around config and glue.
The package is called gonfig and has been submitted to CRAN.
With it you would have:
config.yml
default:
  path_directory : "data-public"
  path_file_a : "{path_directory}/a.csv"
  path_file_b : "{path_directory}/b.csv"
  path_file_c : "{path_directory}/c.csv"
And in your R script:
config <- gonfig::get()
config$path_file_c
#> "data-public/c.csv"

Change ossec(wazuh) agent profiles via saltstack

I'm trying to modify the <config-profile> section of an ossec.conf file, incorporating content from grains, something like:
ossec-profiles:
  - profile1
  - profile2
and I want to modify the section <config-profile> from
<config-profile>centos, centos7</config-profile>
to
<config-profile>centos, centos7, profile1, profile2</config-profile>
in the ossec.conf file
Any idea?
This can be done with the file.replace state module, which lets you replace text in a file based on a pattern. So in your case you can do the following:
You need to capture the pattern in a regex group so you can reference it later, as shown below:
configure_ossec:
  file.replace:
    - name: /path/to/ossec.conf
    - pattern: '((<config-profile>.*?)[^<]*)'
    - repl: {{ '\\1, ' + pillar['ossec-profiles'] | join(', ') }}
Alternatively, you could use this pattern to match only what is inside the config-profile tags, so you can reference it again in the repl parameter:
(?<=<config-profile>)(.*)(?=<\/config-profile>)
Note: since pillar['ossec-profiles'] returns a list of profiles,
you have to use the join filter to separate the values
with a comma delimiter.
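For example, with the two profiles from the question stored under the ossec-profiles pillar key, the Jinja in the repl line should render to roughly:

- repl: \1, profile1, profile2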
Finally, the expected output is something like this:
Changes:
  ----------
  diff:
      ---
      +++
      @@ -1 +1 @@
      -<config-profile>centos, centos7</config-profile>
      +<config-profile>centos, centos7, profile1, profile2</config-profile>

How to obtain files included in initial commit using git2r (libgit2)?

I am using the R package git2r to interface with libgit2. I would like to obtain the list of files that were updated in each commit, similar to the output from git log --stat or git log --name-only. However, I am unable to obtain the files that were included in the initial commit. Below I provide code to setup an example Git repository as well as my attempted solutions based on my research.
Reproducible example
The code below creates a temporary directory in /tmp, creates empty text files, and then commits each file separately.
# Create example Git repo
path <- tempfile("so-git2r-ex-")
dir.create(path)
setwd(path)
# Set the number of fake files
n_files <- 3
file.create(paste0("file", 1:n_files, ".txt"))
library("git2r")
repo <- init(".")
for (i in 1:n_files) {
  add(repo, sprintf("file%d.txt", i))
  commit(repo, sprintf("Added file %d", i))
}
Option 1 - compare diff of two trees
This SO post recommends you perform a diff comparing the tree object of the desired commit and its parent commit. This works well, except for the initial commit because there is no parent commit to compare it to.
get_files_from_diff <- function(c1, c2) {
  # Obtain files updated in commit c1.
  # c2 is the commit that preceded c1.
  git_diff <- diff(tree(c1), tree(c2))
  files <- sapply(git_diff@files, function(x) x@new_file)
  return(files)
}
log <- commits(repo)
n <- length(log)
for (i in 1:n) {
  print(i)
  if (i == n) {
    print("Unclear how to obtain list of files from initial commit.")
  } else {
    files <- get_files_from_diff(log[[i]], log[[i + 1]])
    print(files)
  }
}
Option 2 - Parse commit summary
This SO post suggests obtaining commit information, like the files changed, by parsing the commit summary. This gives output very similar to git log --stat, but again the exception is the initial commit: it lists no files. Looking at the source code, the files in the commit summary are obtained via the same method as above, which explains why no files are displayed for the initial commit (it has no parent commit).
for (i in 1:n) {
  summary(log[[i]])
}
Update
This should be possible. The Git command diff-tree has a flag --root to compare the root commit to a NULL tree (source). From the man page:
--root
    When --root is specified the initial commit will be shown as a
    big creation event. This is equivalent to a diff against the
    NULL tree.
Furthermore, the libgit2 library has the function git_diff_tree_to_tree, which accepts a NULL tree. Unfortunately, it is unclear to me if it is possible to pass a NULL tree to the git2r C function git2r_diff via the git2r diff method for git-tree objects. Is there a way to create a NULL tree object with git2r?
> tree()
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘tree’ for signature ‘"missing"’
> tree(NULL)
Error in (function (classes, fdef, mtable) :
unable to find an inherited method for function ‘tree’ for signature ‘"NULL"’
I came up with a solution based on the insight from my colleague that you can obtain the files currently being tracked by inspecting the git_tree object. This shows all the files that have been tracked up to this point, but since the root commit is the first commit, this means these files had to be added in that commit.
The summary method prints the files, and this data frame can be captured using the as method.
summary(tree(log[[n]]))
#     mode type                                      sha      name
# 1 100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 file1.txt

as(tree(log[[n]]), "data.frame")
#     mode type                                      sha      name
# 1 100644 blob e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 file1.txt
The function below obtains the files from the root commit. While it is not apparent in this small example, the main complication is that subdirectories are represented as trees, so you need to recursively search the tree to obtain all the filenames.
obtain_files_in_commit_root <- function(repo, commit) {
  # Obtain the files in the root commit of a Git repository
  stopifnot(class(repo) == "git_repository",
            class(commit) == "git_commit",
            length(parents(commit)) == 0)
  entries <- as(tree(commit), "data.frame")
  files <- character()
  while (nrow(entries) > 0) {
    if (entries$type[1] == "blob") {
      # If the entry is a blob, i.e. file:
      #  - record the name of the file
      #  - remove the entry
      files <- c(files, entries$name[1])
      entries <- entries[-1, ]
    } else if (entries$type[1] == "tree") {
      # If the entry is a tree, i.e. subdirectory:
      #  - lookup the entries for this tree
      #  - add the subdirectory to the name so that the path is correct
      #  - remove the entry from the beginning and add the new entries
      #    to the end of the data.frame
      new_tree_df <- as(lookup(repo, entries$sha[1]), "data.frame")
      new_tree_df$name <- file.path(entries$name[1], new_tree_df$name)
      entries <- rbind(entries[-1, ], new_tree_df)
    } else {
      stop(sprintf("Unknown type %s found in commit %s",
                   entries$type[1], commit))
    }
  }
  return(files)
}
obtain_files_in_commit_root(repo, log[[n]])
# [1] "file1.txt"

gperftools Error: substr outside of string at /usr/local/bin/pprof line 3618

As suggested by Dirk Eddelbuettel in this talk and this answer, I tried to profile compiled R code using gperftools. Here is what I did.
I used Dirk's profilingSmall.R as the script that I want to profile. I repeat it here:
## R Extensions manual, section 3.2 'Profiling R for speed'
## 'N' reduced to 99 here
suppressMessages(library(MASS))
suppressMessages(library(boot))
storm.fm <- nls(Time ~ b*Viscosity/(Wt - c), stormer, start = c(b=29.401, c=2.2183))
st <- cbind(stormer, fit=fitted(storm.fm))
storm.bf <- function(rs, i) {
  st$Time <- st$fit + rs[i]
  tmp <- nls(Time ~ (b * Viscosity)/(Wt - c), st, start = coef(storm.fm))
  tmp$m$getAllPars()
}
rs <- scale(resid(storm.fm), scale = FALSE)  # remove the mean
Rprof("boot.out")
storm.boot <- boot(rs, storm.bf, R = 99)  # pretty slow
Rprof(NULL)
To profile it I ran the following script:

LD_PRELOAD="/usr/lib/libprofiler.so.0" \
CPUPROFILE=sample.log \
Rscript profilingSmall.R
Then I tried to parse the log file using
pprof /usr/bin/R sample.log
This returned the following error
Using local file /usr/bin/R.
Using local file sample.log.
substr outside of string at /usr/local/bin/pprof line 3618.
Use of uninitialized value in string eq at /usr/local/bin/pprof line 3618.
substr outside of string at /usr/local/bin/pprof line 3620.
Use of uninitialized value in string eq at /usr/local/bin/pprof line 3620.
sample.log: header size >= 2**16
sample.log is empty. However, a bunch of sample.log_<number> files were created that contain information that looks reasonable.
I had the same problem, but then I realized my mistake. I'd done:
export CPUPROFILE=test.prof
export LD_PRELOAD="/usr/local/lib/libprofiler.so"
testprog ...
pprof --web `which testprog` test.prof
If I stopped after running testprog, the prof file wasn't empty, but after running pprof it was, and pprof crashed with the substr error.
What I realized later was that, by setting and exporting LD_PRELOAD, libprofiler.so was also loaded for pprof itself, overwriting test.prof.
You just need to ensure LD_PRELOAD is not set when you run pprof.
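One way to ensure that is to set the variables for the profiled program's command only, rather than exporting them into the shell session:

# libprofiler.so is preloaded only for testprog, not for the whole session
CPUPROFILE=test.prof LD_PRELOAD=/usr/local/lib/libprofiler.so testprog ...
# LD_PRELOAD is unset here, so pprof no longer clobbers test.prof
pprof --web `which testprog` test.prof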
I'm using gperftools-2.5, and I also encountered the same problem:
[root@localhost ivrserver]# pprof --text ./IvrServer ivr.prof
Using local file ./IvrServer.
Using local file ivr.prof.
substr outside of string at /usr/local/bin/pprof line 3695.
Use of uninitialized value in string eq at /usr/local/bin/pprof line 3695.
substr outside of string at /usr/local/bin/pprof line 3697.
Use of uninitialized value in string eq at /usr/local/bin/pprof line 3697.
ivr.prof: header size >= 2**16
I found this is because the prof file (ivr.prof in my example) is empty.
Every time the profiler starts and stops, it creates a new prof file; you should use xxx.prof.0, xxx.prof.1, ... to get the right result.
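In other words, point pprof at the numbered file the profiler actually wrote, for example:

pprof --text ./IvrServer ivr.prof.0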
