I am using Google's standard workflow to run the migration from the old dataset to the new dataset (Migration steps). I filled in the missing values, such as the Property ID, BigQuery project ID, etc. When I ran the bash script, the following error occurred:
Migrating mindfulness.com_mindfulness_ANDROID.app_events_20180515
--allow_large_results --append_table --batch --debug_mode --destination_table=analytics_171690789.events_20180515 --noflatten_results --nouse_legacy_sql --parameter=firebase_app_id::1:437512149764:android:0dfd4ab1e9926c7c --parameter=date::20180515 --parameter=platform::ANDROID#platform --project_id=mindfulness --use_gce_service_account
FATAL Flags positioning error: Flag '--project_id=mindfulness' appears after final command line argument. Please reposition the flag.
Run 'bq help' to get help.
I couldn't find a solution on Stack Overflow or Google. Does anyone have an idea how to solve this?
My migration.sh script (with small modifications to the IDs to stay anonymous):
# Analytics Property ID for the Project. Find this in Analytics Settings in Firebase
PROPERTY_ID=171230123
# Bigquery Export Project
BQ_PROJECT_ID="mindfulness" #(e.g., "firebase-public-project")
# Firebase App ID for the app.
FIREBASE_APP_ID="1:123412149764:android:0dfd4ab1e1234c7c" #(e.g., "1:300830567303:ios:
# Dataset to import from.
BQ_DATASET="com_mindfulness_ANDROID" #(e.g., "com_firebase_demo_IOS")
# Platform
PLATFORM="ANDROID"#"platform of the app. ANDROID or IOS"
# Date range for which you want to run migration, [START_DATE,END_DATE]
START_DATE=20180515
END_DATE=20180517
# Do not modify the script below, unless you know what you are doing :)
startdate=$(date -d"$START_DATE" +%Y%m%d) || exit -1
enddate=$(date -d"$END_DATE" +%Y%m%d) || exit -1
# Iterate through the dates.
DATE="$startdate"
while [ "$DATE" -le "$enddate" ]; do
# BQ table constructed from above params.
BQ_TABLE="$BQ_PROJECT_ID.$BQ_DATASET.app_events_$DATE"
echo "Migrating $BQ_TABLE"
cat migration_script.sql | sed -e "s/SCRIPT_GENERATED_TABLE_NAME/$BQ_TABLE/g" | bq query \
--debug_mode \
--allow_large_results \
--noflatten_results \
--use_legacy_sql=False \
--destination_table analytics_$PROPERTY_ID.events_$DATE \
--batch \
--append_table \
--parameter=firebase_app_id::$FIREBASE_APP_ID \
--parameter=date::$DATE \
--parameter=platform::$PLATFORM \
--project_id=$BQ_PROJECT_ID
temp=$(date -I -d "$DATE + 1 day")
DATE=$(date -d "$temp" +%Y%m%d)
done
exit
# END OF SCRIPT
If you look at the output of your script, it contains this bit of text, right before the flag that's out of order:
--parameter=platform::ANDROID#platform --project_id=mindfulness
I'm pretty sure you want your platform to be ANDROID, not ANDROID#platform.
I suspect you can fix this just by putting a space between the end of the string and the inline comment, so you have something like this:
PLATFORM="ANDROID" #"platform of the app. ANDROID or IOS"
Although to be safe, you might want to remove the inline comments at the end of each line entirely.
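For example (a sketch of the affected assignments only, reusing the values from the question), the config section is safer with each comment on its own line:
# Platform of the app: ANDROID or IOS.
PLATFORM="ANDROID"
# BigQuery Export Project (e.g., "firebase-public-project").
BQ_PROJECT_ID="mindfulness"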
My zsh shell has quite a long startup time, and if I hold down the enter key the terminal prompt lags behind significantly. So I looked for solutions and profiling options and found this GitHub issue. Further down, one comment suggests running the following commands:
zsh -xv 2>&1 | ts -i "%.s" > zsh_startup.log
followed by
sort --field-separator=' ' -r -k1 zsh_startup.log > sorted.log
which gives me the following output:
2.106722 ESC[?1hESC=ESC[?1lESC>l
1.388755 ESC[?1hESC=ESC[?1lESC>l
1.185498 ESC[?1hESC=ESC[?1lESC>exit
1.153527 ESC[?1hESC=ESC[?1lESC>ls -la
0.065941 +powerline_precmd:1> PS1=$'%{\C-[[38;5;15m%}%{\C-[[48;5;31m%} ~ %{\C-[[48;5;237m%}%{\C-[[38;5;31m%}%{\C-[[38;5;250m%}%{\C-[[48;5;237m%} Private %{\C-[[48;5;237m%}%{\C-[[38;5;244m%}%{\C-[[38;5;254m%}%{\C-[[48;5;237m%} tmp %{\C-[[48;5;238m%}%{\C-[[38;5;237m%}%{\C-[[38;5;39m%}%{\C-[[48;5;238m%} 1 %{\C-[[48;5;236m%}%{\C-[[38;5;238m%}%{\C-[[38;5;15m%}%{\C-[[48;5;236m%} %# %{\C-[[0m%}%{\C-[[38;5;236m%}%{\C-[[0m%} '
0.052396 +powerline_precmd:1> PS1=$'%{\C-[[38;5;15m%}%{\C-[[48;5;31m%} ~ %{\C-[[48;5;237m%}%{\C-[[38;5;31m%}%{\C-[[38;5;250m%}%{\C-[[48;5;237m%} Private %{\C-[[48;5;237m%}%{\C-[[38;5;244m%}%{\C-[[38;5;254m%}%{\C-[[48;5;237m%} tmp %{\C-[[48;5;238m%}%{\C-[[38;5;237m%}%{\C-[[38;5;39m%}%{\C-[[48;5;238m%} 1 %{\C-[[48;5;236m%}%{\C-[[38;5;238m%}%{\C-[[38;5;15m%}%{\C-[[48;5;236m%} %# %{\C-[[0m%}%{\C-[[38;5;236m%}%{\C-[[0m%} '
...
It looks like the first four commands (or whatever the lines represent) are taking up the majority of the time. I'm having a hard time understanding the output and would like some guidance in interpreting the first lines - no need to solve the underlying problem of the slow shell :) just the interpretation of the first four lines.
The file itself is much longer and I just included the first few lines.
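For anyone puzzling over the same output: as I understand it, ts -i (from moreutils) prefixes each line with the time elapsed since the previous line, so the first column is per-line cost rather than a cumulative timestamp. A quick sanity check, assuming moreutils is installed:
# Each line gets the seconds elapsed since the line before it.
(echo first; sleep 1; echo second) | ts -i "%.s"
# prints roughly:
# 0.000005 first
# 1.003471 second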
Notes:
I'm using oh-my-zsh with the following plugins
plugins=(
z
git
kubectl
zsh-syntax-highlighting
zsh-autosuggestions
)
I configured l to be an alias for ls -la in my .zshrc
this is what happens when I keep pressing the enter key:
(when there are no gaps between prompts, I had already released the enter key)
I am using the demo.sh provided in the syntaxnet repository. If I give input with '\n' separation, it takes 27.05 seconds to run 3000 lines of text, but when I run each line individually, it takes more than one hour.
That means loading the model takes over 2.5 seconds per invocation. If that step were separated out and the model kept cached, it would make the whole pipeline faster.
Here is the modified version of demo.sh:
PARSER_EVAL=bazel-bin/syntaxnet/parser_eval
MODEL_DIR=syntaxnet/models/parsey_mcparseface
[[ "$1" == "--conll" ]] && INPUT_FORMAT=stdin-conll || INPUT_FORMAT=stdin
$PARSER_EVAL \
--input=$INPUT_FORMAT \
--output=stdout-conll \
--hidden_layer_sizes=64 \
--arg_prefix=brain_tagger \
--graph_builder=structured \
--task_context=$MODEL_DIR/context.pbtxt \
--model_path=$MODEL_DIR/tagger-params \
--slim_model \
--batch_size=1024 \
--alsologtostderr \
| \
$PARSER_EVAL \
--input=stdin-conll \
--output=stdout-conll \
--hidden_layer_sizes=512,512 \
--arg_prefix=brain_parser \
--graph_builder=structured \
--task_context=$MODEL_DIR/context.pbtxt \
--model_path=$MODEL_DIR/parser-params \
--slim_model \
--batch_size=1024 \
--alsologtostderr
I want to build a function that takes an input sentence and returns the dependency parser output, with the model stored in a local variable, like below (the code below is just to make the question clear):
dependency_parsing_model = ...
def give_dependency_parser(sentence, model=dependency_parsing_model):
...
#logic here
...
return dependency_parsing_output
In the above, the model is stored in a variable, so each per-sentence call to the function takes less time.
How can I do this?
The current version of syntaxnet's Parsey McParseface has two limitations which you've run across:
Sentences are read from stdin or a file, not from a variable
The model is in two parts and not a single executable
I have a branch of tensorflow/models:
https://github.com/dmansfield/models/tree/documents-from-tensor
which I'm working with the maintainers to get merged. With this branch of the code you can build the entire model in one graph (using a new python script called parsey_mcparseface.py) and feed sentences in with a tensor (i.e. a python variable).
Not the best answer in the world I'm afraid because it's very much in flux. There's no simple recipe for getting this working at the moment.
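In the meantime, a workaround that avoids reloading the model for every sentence is to keep the two-stage pipeline running as a long-lived process and feed it one sentence at a time. Below is a rough bash sketch using a coprocess. It assumes the pipeline above is saved as parser_pipeline.sh, and that the parser flushes a complete CoNLL block (terminated by a blank line) after each input sentence - which may not hold if its stdout is block-buffered when attached to a pipe:
#!/bin/bash
# Start the parser once; the model stays loaded for the life of the coprocess.
coproc PARSER { ./parser_pipeline.sh; }
# Hypothetical helper: write one sentence, read back one CoNLL block.
# The result is left in $parse_output (a global), because the coprocess
# file descriptors are not available inside a $(...) subshell.
give_dependency_parse() {
    local sentence=$1 line
    parse_output=""
    printf '%s\n' "$sentence" >&"${PARSER[1]}"
    while IFS= read -r line <&"${PARSER[0]}"; do
        [[ -z $line ]] && break   # CoNLL blocks end with a blank line
        parse_output+="$line"$'\n'
    done
}
give_dependency_parse "Bob brought the pizza to Alice."
printf '%s' "$parse_output"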
This image shows some hard drive IDs; they look pretty standardized (acquired from a web GUI which gathered the data from the command prompt on CentOS).
Are these drive IDs standardized, and how can I parse the data out of any set of hard drives on the market? I.e., I want to end up with the following variables (would a regex work for any drive on the market?):
type=scsi
type2=SATA
MFR=WDC
model=WDC_WD1001FALS
serial=WD-WCATR6632234
Is this apparent order truly standardized across all manufacturers, and how do I parse it?
The pattern you see comes from a .rules file on your computer, something like "60-persistent-storage.rules":
# by-id (hardware serial number)
KERNEL=="hd*[!0-9]", ENV{ID_SERIAL}=="?*", \
SYMLINK+="disk/by-id/ata-$env{ID_SERIAL}"
KERNEL=="hd*[0-9]", ENV{ID_SERIAL}=="?*", \
SYMLINK+="disk/by-id/ata-$env{ID_SERIAL}-part%n"
KERNEL=="sd*[!0-9]", ENV{ID_SCSI_COMPAT}=="?*", \
SYMLINK+="disk/by-id/scsi-$env{ID_SCSI_COMPAT}"
KERNEL=="sd*[0-9]", ENV{ID_SCSI_COMPAT}=="?*", \
SYMLINK+="disk/by-id/scsi-$env{ID_SCSI_COMPAT}-part%n"
ENV{DEVTYPE}=="disk", ENV{ID_BUS}=="?*", ENV{ID_SERIAL}=="?*", \
SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}"
ENV{DEVTYPE}=="partition", ENV{ID_BUS}=="?*", ENV{ID_SERIAL}=="?*", \
SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n"
ENV{DEVTYPE}=="disk", ENV{ID_EDD}=="?*", \
SYMLINK+="disk/by-id/edd-$env{ID_EDD}"
ENV{DEVTYPE}=="partition", ENV{ID_EDD}=="?*", \
SYMLINK+="disk/by-id/edd-$env{ID_EDD}-part%n"
ENV{DEVTYPE}=="disk", ENV{ID_WWN_WITH_EXTENSION}=="?*", \
SYMLINK+="disk/by-id/wwn-$env{ID_WWN_WITH_EXTENSION}"
ENV{DEVTYPE}=="partition", ENV{ID_WWN_WITH_EXTENSION}=="?*", \
SYMLINK+="disk/by-id/wwn-$env{ID_WWN_WITH_EXTENSION}-part%n"
These rules can be changed.
Note that your strings are SCSI IDs, generated by the SCSI rules above (although I'm not sure exactly how the ID string itself gets composed).
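If you only need to split names like the ones shown, shell parameter expansion is enough. This is a sketch under a strong assumption - that the ID always has the layout <bus>-<transport>_<vendor>_<model>_<serial>, with the serial as the final underscore-separated field. Vendors embed underscores and hyphens in model and serial strings inconsistently, so no single regex is guaranteed to work for every drive on the market:
#!/bin/bash
# Hypothetical by-id name built from the fields in the question.
id="scsi-SATA_WDC_WD1001FALS_WD-WCATR6632234"
type=${id%%-*}       # before the first "-"         -> scsi
rest=${id#*-}        # SATA_WDC_WD1001FALS_WD-WCATR6632234
type2=${rest%%_*}    # first "_" field              -> SATA
rest=${rest#*_}      # WDC_WD1001FALS_WD-WCATR6632234
mfr=${rest%%_*}      # -> WDC
serial=${rest##*_}   # last "_" field               -> WD-WCATR6632234
model=${rest%_*}     # everything before the serial -> WDC_WD1001FALS
echo "type=$type type2=$type2 MFR=$mfr model=$model serial=$serial"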
I have written a command-line tool that uses sub-commands much like Mercurial, Git, Subversion &c., in that its general usage is:
>myapp [OPTS] SUBCOMMAND [SUBCOMMAND-OPTS] [ARGS]
E.g.
>myapp --verbose speak --voice=samantha --quickly "hello there"
I'm now in the process of building Zsh completion for it but have quickly found out that it is a very complex beast. I have had a look at the _hg and _git completions, but they are very complex and different in approach (I struggle to understand them); both, however, seem to handle each sub-command separately.
Does anyone know if there is a way, using the built-in functions (_arguments, _values, pick_variant &c.), to handle the concept of sub-commands correctly, including handling general options and sub-command-specific options appropriately? Or would the best approach be to handle the general options and sub-commands manually?
A noddy example would be very much appreciated.
Many thanks.
Writing completion scripts for zsh can be quite difficult. Your best bet is to use an existing one as a guide. The one for Git is way too much for a beginner. You can use this repo:
https://github.com/zsh-users/zsh-completions
As for your question, you have to use the concept of state. You define your subcommands in a list and then identify via $state which command you are in. Then you define the options for each command. You can see this in the completion script for play. A simplified version is below:
_play() {
local ret=1
_arguments -C \
'1: :_play_cmds' \
'*::arg:->args' \
&& ret=0
case $state in
(args)
case $line[1] in
(build-module|list-modules|lm|check|id)
_message 'no more arguments' && ret=0
;;
(dependencies|deps)
_arguments \
'1:: :_play_apps' \
'(--debug)--debug[Debug mode (even more informations logged than in verbose mode)]' \
'(--jpda)--jpda[Listen for JPDA connection. The process will suspended until a client is plugged to the JPDA port.]' \
'(--sync)--sync[Keep lib/ and modules/ directory synced. Delete unknow dependencies.]' \
'(--verbose)--verbose[Verbose Mode]' \
&& ret=0
;;
esac
esac
(If you are going to paste this, use the original source, as this won't work).
It looks daunting, but the general idea is not that complicated:
The subcommand comes first (_play_cmds is a list of subcommands with a description for each one).
Then come the arguments. The arguments are built based on which subcommand you are choosing. Note that you can group multiple subcommands if they share arguments.
With man zshcompsys, you can find more info about the whole system, although it is somewhat dense.
I found a technique that works well and is easy to understand. Basically, you create a new completion function for each subcommand and call it from the top-level completion function. Here's an example with dolt, showing how dolt completes to dolt table and dolt table completes to dolt table import, which then completes with a set of flags:
_dolt() {
local line state
_arguments -C \
"1: :->cmds" \
"*::arg:->args"
case "$state" in
cmds)
_values "dolt command" \
"table[Commands for copying, renaming, deleting, and exporting tables.]" \
;;
args)
case $line[1] in
table)
_dolt_table
;;
esac
;;
esac
}
_dolt_table() {
local line state
_arguments -C \
"1: :->cmds" \
"*::arg:->args"
case "$state" in
cmds)
_values "dolt_table command" \
"import[Creates, overwrites, replaces, or updates a table from the data in a file.]" \
;;
args)
case $line[1] in
import)
_dolt_table_import
;;
esac
;;
esac
}
_dolt_table_import() {
_arguments -s \
{-c,--create-table}'[Create a new table, or overwrite an existing table (with the -f flag) from the imported data.]' \
{-u,--update-table}'[Update an existing table with the imported data.]' \
{-f,--force}'[If a create operation is being executed, data already exists in the destination, the force flag will allow the target to be overwritten.]' \
{-r,--replace-table}'[Replace existing table with imported data while preserving the original schema.]' \
'(--continue)--continue[Continue importing when row import errors are encountered.]' \
{-s,--schema}'[The schema for the output data.]' \
{-m,--map}'[A file that lays out how fields should be mapped from input data to output data.]' \
{-pk,--pk}'[Explicitly define the name of the field in the schema which should be used as the primary key.]' \
'(--file-type)--file-type[Explicitly define the type of the file if it can''t be inferred from the file extension.]' \
'(--delim)--delim[Specify a delimiter for a csv style file with a non-comma delimiter.]'
}
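One detail neither answer spells out: for zsh to pick up a function like _dolt automatically, the file needs a #compdef tag and must live in a directory on $fpath (or you can bind it by hand). A minimal sketch, assuming the function above is saved in a file named _dolt:
#compdef dolt
# Save as _dolt in a directory on $fpath, e.g.:
#   fpath=(~/.zsh/completions $fpath)
#   autoload -Uz compinit && compinit
# Or bind an already-loaded function to the command manually:
#   compdef _dolt dolt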
I wrote a full guide here:
https://www.dolthub.com/blog/2021-11-15-zsh-completions-with-subcommands/
I am running a script on a Solaris box, specifically SunOS 5.7. I am not root. I am trying to execute a script similar to the following:
newgrp thegroup << FOO
source .login_stuff
echo "hello world"
FOO
The script runs. The problem is that it returns to the calling process, which puts me back in the old group, with .login_stuff never sourced. I understand this behavior. What I am looking for is a way to stay in the subshell. Now I know I could put an xterm& in the script (see below) and that would do it, but having a new xterm is undesirable.
Here is the xterm variant, passing the current pid as a parameter:
newgrp thegroup << FOO
source .login_stuff
xterm&
echo $1
kill -9 $1
FOO
I do not have sg available.
Also, newgrp is necessary.
The following works nicely; put the following bit at the top of the (Bourne or Bash) script:
### first become another group
group=admin
if [ $(id -gn) != $group ]; then
exec sg $group "$0 $*"
fi
### now continue with rest of the script
This works fine on Linuxen. One caveat: arguments containing spaces are broken apart. I suggest you use the
env arg1='value 1' arg2='value 2' script.sh
construct to pass them in (I couldn't get it to work with $@ for some reason).
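For instance (hypothetical variable names), values passed this way survive the re-exec because they travel in the environment rather than in the positional parameters:
env arg1='value 1' arg2='value 2' ./script.sh
# inside script.sh, after the exec sg re-invocation:
echo "arg1 is: $arg1"   # -> arg1 is: value 1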
The newgrp command can only meaningfully be used from an interactive shell, AFAICT. In fact, I gave up on it about ... well, let's say long enough ago that the replacement I wrote is now eligible to vote in both the UK and the USA.
Note that newgrp is a special command 'built into' the shell. Strictly, it is a command that is external to the shell, but the shell has built-in knowledge about how to handle it. The shell actually exec's the program, so you get a new shell immediately afterwards. It is also a setuid root program. On Solaris, at least, newgrp also seems to ignore the SHELL environment variable.
I have a variety of programs that work around the issue that newgrp was intended to address. Remember, the command pre-dates the ability of users to belong to multiple groups at once (see the Version 7 Unix Manuals). Since newgrp does not provide a mechanism to execute commands after it executes, unlike su or sudo, I wrote a program newgid which, like newgrp, is a setuid root program and allows you to switch from one group to another. It is fairly simple - just main() plus a set of standardized error-reporting functions. Contact me (first dot last at gmail dot com) for the source. I also have a much more dangerous command called 'asroot' that allows me (but only me - under the default compilation) to tweak user and group lists much more thoroughly.
asroot: Configured for use by jleffler only
Usage: asroot [-hnpxzV] [<uid controls>] [<gid controls>] [-m umask] [--] command [arguments]
<uid controls> = [-u usr|-U uid] [-s euser|-S euid][-i user]
<gid controls> = [-C] [-g grp|-G gid] [-a grp][-A gid] [-r egrp|-R egid]
Use -h for more help
Option summary:
-a group Add auxilliary group (by name)
-A gid Add auxilliary group (by number)
-C Cancel all auxilliary groups
-g group Run with specified real GID (by name)
-G gid Run with specified real GID (by number)
-h Print this message and exit
-i Initialize UID and GIDs as if for user (by name or number)
-m umask Set umask to given value
-n Do not run program
-p Print privileges to be set
-r euser Run with specified effective UID (by name)
-R euid Run with specified effective UID (by number)
-s egroup Run with specified effective GID (by name)
-S egid Run with specified effective GID (by number)
-u user Run with specified real UID (by name)
-U uid Run with specified real UID (by number)
-V Print version and exit
-x Trace commands that are executed
-z Do not verify the UID/GID numbers
Mnemonic for effective UID/GID:
s is second letter of user;
r is second letter of group
(This program grew: were I redoing it from scratch, I would accept user ID or user name without requiring different option letters; ditto for group ID or group name.)
It can be tricky to get permission to install setuid root programs. There are some workarounds available now because of the multi-group facilities. One technique that may work is to set the setgid bit on the directories where you want the files created. This means that regardless of who creates the file, the file will belong to the group that owns the directory. This often achieves the effect you need - though I know of few people who consistently use this.
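For example, the setgid-directory technique looks like this (a sketch; the directory and group names are placeholders):
# Any file created under /srv/shared inherits the directory's group,
# regardless of the creating user's current primary group.
chgrp thegroup /srv/shared
chmod g+s /srv/shared        # set the setgid bit on the directory
touch /srv/shared/newfile
ls -l /srv/shared/newfile    # group owner will be thegroup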
newgrp adm << ANYNAME
# You can do more lines than just this.
echo This is running as group \$(id -gn)
ANYNAME
..will output:
This is running as group adm
Be careful -- make sure you escape the '$' with a backslash. The interactions are a little strange, because the here-document expands $(...) before the shell runs as the other group, even inside single quotes. So, if your primary group is 'users', and the group you're trying to use is 'adm', then:
newgrp adm << END
# You can do more lines than just this.
echo 'This is running as group $(id -gn)'
END
..will output:
This is running as group users
..because 'id -gn' was expanded by the current shell, and the result was then sent to the shell running as adm.
Anyway, I know this post is ancient, but I hope this is useful to someone.
This example was expanded from plinjzaad's answer; it handles a command line which contains quoted parameters that contain spaces.
#!/bin/bash
group=wg-sierra-admin
if [ $(id -gn) != $group ]
then
# Construct an array which quotes all the command-line parameters.
arr=("${@/#/\"}")
arr=("${arr[*]/%/\"}")
exec sg $group "$0 ${arr[@]}"
fi
### now continue with rest of the script
# This is a simple test to show that it works.
echo "group: $(id -gn)"
# Show all command line parameters.
for i in $(seq 1 $#)
do
eval echo "$i:\${$i}"
done
I used this to demonstrate that it works.
% ./sg.test 'a b' 'c d e' f 'g h' 'i j k' 'l m' 'n o' p q r s t 'u v' 'w x y z'
group: wg-sierra-admin
1:a b
2:c d e
3:f
4:g h
5:i j k
6:l m
7:n o
8:p
9:q
10:r
11:s
12:t
13:u v
14:w x y z
Maybe
exec $SHELL
would do the trick?
You could use sh& (or whatever shell you want to use) instead of xterm&
You could also look into using an alias (if your shell supports this), so that you stay in the context of the current shell.
In a script file eg tst.ksh:
#! /bin/ksh
/bin/ksh -c "newgrp thegroup"
At the command line:
>> groups fred
oldgroup
>> tst.ksh
>> groups fred
thegroup
sudo su - [user-name] -c exit;
Should do the trick :)