I've been working with an API call to structure its output as JSON so I can later push it into a database. The code looks like this:
getPage() {
curl --fail -X GET 'https://api.app.com/v1/test?page=1&pageSize=1000&sort=desc' \
-H 'Authorization: Bearer 123abc456pickupsticks789' \
-H 'cache-control: no-cache'
}
getPage \
| jq -c '.items | .[] | {landing_id: .landing_id, submitted_at: .submitted_at, answers: .answers, email: .hidden.email}' \
> testpush.json
When I run it, though, it produces this error: jq: error (at <stdin>:0): Cannot iterate over null (null)
I've looked at solutions such as this one, or this one from this site, and this response.
The common solution seemed to be adding a ? after [], and I tried it in the jq line towards the bottom, but it still does not work. It just produces an empty JSON file.
Am I misreading the takeaway from those other answers and not putting my ? in the right place?
To protect against the possibility that .items is not an array, you could write:
.items | .[]?
or even more robustly:
try .items[]
which is equivalent to (.items[])?.
In summary:
try E is equivalent to try E catch empty
try E is equivalent to (E)?
(Note that the expressions .items[]? and (.items[])? are not identical.)
However none of these will provide protection against input that is invalid JSON.
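To see the behavior concretely, here is a quick test with a stand-in for the API response (the {"items": null} payload is just an assumption for illustration; your actual response may differ):
echo '{"items": null}' | jq '.items | .[]'    # jq: error ... Cannot iterate over null (null)
echo '{"items": null}' | jq '.items | .[]?'   # no output, no error
echo '{"items": null}' | jq 'try .items[]'    # no output, no error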
P.S. In future, please follow the MCVE guidelines (http://stackoverflow.com/help/mcve); in the present case, it would have helped if you had provided an illustrative JSON snippet based on the output produced by the curl command.
It is necessary to let jq know that it can continue after an unexpected value while iterating over that array. try or ? are perfect options for that.
Bear in mind that you either need to guarantee the shape of the data or let the interpreter know that it is OK to continue. It may sound redundant, but it is a fail-safe approach that prevents unexpected results which are harder to track down.
Also, be aware of the differences between ? and try.
Assuming that $sample meets the JSON standard, the code below will always work:
sample='{"myvar":1,"var2":"foo"}'
jq '{newVar: ((.op[]? | .item) // 0)}' <<< "$sample"
This outputs {"newVar":0}: the op array is null for the $sample above, but it is clear to jq that it can continue without asking for your intervention.
But if you assume ? is the same as try, you may get an error (it took me a while to learn this, and it is not clear in the documentation). As an example of improper use of ?, we have:
sample='{"myvar":1,"var2":"foo"}'
jq '{newVar: (.op[].item? // 0)}' <<< "$sample"
So, since op is null, this leads to an error: you are telling jq to ignore an error while retrieving .item, but there is no mention of the possibility of an error during the attempt to iterate over null (.op[] in this case), and that attempt happens before .item is ever checked.
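For reference, with the $sample above this second form fails with the same error seen in the question (exact wording can vary by jq version):
jq: error (at <stdin>:0): Cannot iterate over null (null)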
On the other hand, try would work in this case:
sample='{"myvar":1,"var2":"foo"}'
jq '{newVar: (try .op[].item catch 0)}' <<< "$sample"
This is a small difference in usage that can lead to a large difference in the result.
I've found a few Stack Overflow questions about this, but they all concern only the :nmap or :noremap commands.
I want a command, not just a keybinding. Is there any way to accomplish this?
Use-case:
When I run :make, the buffer doesn't save automatically. So I'd like to combine :make and :w, creating a command :Compile/:C or :Wmake to achieve this.
The general information about concatenating Ex commands via | can be found at :help cmdline-lines.
You can apply this for interactive commands, in mappings, and in custom commands as well.
Note that you only need the special <bar> in mappings (to avoid prematurely concluding the mapping definition and executing the remainder immediately, a frequent beginner's mistake: :nnoremap <F1> :write | echo "This causes an error during Vim startup!"<CR>). For custom commands, you can just write |, but keep in mind which commands consume it as part of their own argument.
:help line-continuation will help with overly long command definitions. Moving multiple commands into a separate :help :function can help, too (but note that this subtly changes the error handling).
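A minimal sketch of that function-based variant (the function and command names here are illustrative):
function! s:WriteAndMake() abort
  " save the buffer, run :make quietly, then repaint the screen
  write
  silent make
  redraw!
endfunction
command! C call s:WriteAndMake()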
arguments
If you want to pass custom command-line arguments, you can add -nargs=* to your :command definition and then specify the insertion point on the right-hand side via <args>. For example, to allow arguments to be passed to your :write command, you could use
:command -nargs=* C w <args> | silent make | redraw!
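With that definition, :C on its own behaves like :w | silent make | redraw!, while something like :C backup.txt (a hypothetical file name) would pass backup.txt through to :write first.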
You can combine commands with |, see :help :bar:
command! C update | silent make | redraw!
However, there is a cleaner way to achieve what you want.
Just enable the 'autowrite' option to automatically write
modified files before a :make:
'autowrite' 'aw' 'noautowrite' 'noaw'
'autowrite' 'aw' boolean (default off)
global
Write the contents of the file, if it has been modified, on each
:next, :rewind, :last, :first, :previous, :stop, :suspend, :tag, :!,
:make, CTRL-] and CTRL-^ command; and when a :buffer, CTRL-O, CTRL-I,
'{A-Z0-9}, or `{A-Z0-9} command takes one to another file.
Note that for some commands the 'autowrite' option is not used, see
'autowriteall' for that.
This option is mentioned in the help for :make.
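For example, in your vimrc:
" write a modified buffer automatically before :make, :next, etc.
set autowrite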
I have found a solution after a bit of trial and error.
Solution for my usecase
command C w <bar> silent make <bar> redraw!
This is for compiling with make, and it prints output only if there is non-zero output.
General solution
command COMMAND_NAME COMMAND_TO_RUN
Where COMMAND_TO_RUN can be built from more than one command using the following construct:
COMMAND_1_THEN_2 = COMMAND_1 <bar> COMMAND_2
You can use this multiple times, and it is very similar to pipes in the shell.
Here is a query (gremlin-python; TinkerPop v3.3.3) that inserts a vertex with properties 'positive' and 'negative', then subtracts one from the other and outputs the result:
g.withSack(0).addV('test').property('positive', 2).property('negative', 3) \
.sack(assign).by('positive') \
.sack(minus).by('negative') \
.sack().next()
Out[13]: -1
Trying the same query except with one of those properties missing produces an error:
g.withSack(0).addV('test').property('positive', 2) \
.sack(assign).by('positive') \
.sack(minus).by('negative') \
.sack().next()
Out[13]: ... GremlinServerError: 500: The property does not exist as the key has no associated value for the provided element: v[36]:negative
I can get around this by coalescing the four possible cases:
g.withSack(0).addV('test').property('positive', 2) \
.coalesce(has('positive').has('negative').sack(assign).by('positive').sack(minus).by('negative').sack(), \
has('positive').values('positive'), \
has('negative').sack(minus).by('negative').sack(), \
sack() \
).next()
It sure is ugly though - is there a neater solution? Ideally there would be a way to supply a default value in the absence of a property. I've tried using the math step as well, but it's only slightly neater and doesn't avoid the problem of non-existent properties. To be clear, in the case of multiple traversers, I want a result for each traverser.
I think if you do math() or sack() to solve this, you should probably consider going with the idea of having "required" properties on these vertices on which you intend to do these calculations. That should make things much easier. I do feel like math() would be neater, though you said otherwise:
g.V().as('a').out('knows').as('b').
math("a - b").
by(coalesce(values('hasSomeValue'), constant(0))).
by(coalesce(values('missingValue'), constant(0)))
That's pretty straightforward though perhaps your examples were meant for simplicity and you have a lot more complexity to consider.
I suppose Gremlin could be changed to allow for a second parameter in the by() as some default if the first traversal did not return anything, thus:
g.V().as('a').out('knows').as('b').
math("a - b").
by(values('hasSomeValue'), constant(0)).
by(values('missingValue'), constant(0))
Saves some typing I'd suppose, but I'm not sure that it is as clear to read as with the use of coalesce(). I think I like the explicit use of coalesce() better.
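Applying the same coalesce()/constant() defaulting to the sack-based form from the question might look like this (a sketch in gremlin-python; it assumes the usual statics like coalesce, values, and constant are in scope, e.g. via load_statics):
g.withSack(0).V().hasLabel('test') \
    .sack(assign).by(coalesce(values('positive'), constant(0))) \
    .sack(minus).by(coalesce(values('negative'), constant(0))) \
    .sack().next()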
So there is that awesome tool for working with JSON data called jq.
And there is that awesome linux file manager called mc.
One day (today) I had the idea to integrate these two, so I could easily preview JSON files in a pretty/formatted way using the F3 keyboard shortcut in Midnight Commander.
I opened the MC extension file using the Command → Edit extension file menu actions and then added the following to the opened configuration file:
# json
regex/\.json$
View=%view{ascii} jq < %f
I thought it was straightforward, but unexpectedly it does not work: trying to view a JSON file (F3) results in an error popup with the contents of jq's help page (the same as when you run jq by itself), starting with: "jq - commandline JSON processor [version 1.5]..."
Can anybody tell me why this configuration is incorrect?
Two minutes after I submitted my question, the answer revealed itself.
I thought that maybe jq does not produce standard output by default... That led me to this question: How to use jq in a shell pipeline? So I modified the extension file to look like:
# json
regex/\.json$
View=%view{ascii} jq '.' < %f
And now it works as expected, piping the result of jq to the internal mc viewer.
Thank you, me ;)
You don't have to use redirection (<) here; you can use just a plain filename %f:
# json
regex/\.json$
View=%view{ascii} jq '.' %f
and, as you mentioned, you have to use a filter: the simple identity filter . is enough.
For anyone wondering why this no longer works: in version 4.8.29, MC switched from mc.ext to the new mc.ext.ini file, which has a slightly different syntax. The new entry should look like this:
[JSON]
Regex=\.json$
View=%view{ascii} jq '.' < %f
The [JSON] line is necessary.
All, I am running the following script to load data onto the Oracle server using a Unix box and sqlldr. Earlier it gave me an error saying sqlldr: command not found. I added "sqlplus << EOF", but it still gives me an "unexpected end of file" syntax error on line 12, even though there are only 11 lines of code. What do you think the problem is?
#!/bin/bash
FILES='ls *.txt'
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'#SERVERNAME << EOF sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f EOF
done
sqlplus will never know what to do with the command sqlldr. They are two separate, complementary command-line utilities for interfacing with an Oracle DB.
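To make the distinction concrete, each tool is invoked separately from the shell (the connection details below are placeholders):
# sqlplus runs SQL/PL*SQL statements that it reads from stdin:
sqlplus user/password@tns_alias <<EOF
SELECT COUNT(*) FROM some_table;
EXIT;
EOF
# sqlldr is its own executable, run directly from the shell:
sqlldr user/password@tns_alias control=load.ctl data=load.dat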
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
#you dont want this FILES='ls *.txt'
CTL_PATH='/blah/blah1/blah2/name'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_USER=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
SERVER_NAME=SERVERNAME
for f in *.txt
do
# don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
#myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
sqlldr "$SCHEMA_USER/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
Without getting too philosophical: using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt deals correctly with files with odd characters in their names (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is to quote all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
In the edit update, I've turned your CTL_PATH and CTL_FILE into variables. I think I understand your intent: you have one standard CTL file that you pass through sed to create a table-specific .ctl file (a good approach, in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create the altered file is very shell-like too.
In the 2nd edit update, I looked here on S.O. and found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up,
A. Are you sure the Oracle Client package is installed on the machine
that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working
$PATH?
C. Try which sqlldr. If you don't get anything, either it's not
installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr cmd.
find / -name 'sqlldr*' will take a long time to run, but it will
print out the path you want to use.
F. Take the "path" part of what is returned (like
/opt/oracle/11.2/client/bin/ (but not the sqlldr at the end), and
edit script at 2nd line with
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH
I ran into the following issue with this simple code:
let () =
print_endline "Hello";
print_endline (Unix.getlogin ())
Running it in the normal case, ./a.out gives:
Hello
ricardo
But running it as ./a.out </dev/null makes Unix.getlogin fail:
Hello
Fatal error: exception Unix.Unix_error(20, "getlogin", "")
Any idea why this happens?
Redirecting the input of a program disconnects it from its controlling terminal. Without a controlling terminal, there is no login name to be found:
$ tty
/dev/pts/2
$ tty < /dev/null
not a tty
You can, however, still find a user's name (perhaps) by getting the user's id (getuid) and looking up their passwd entry (getpwuid), then reading the username from it.
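A minimal OCaml sketch of that lookup (the function name is illustrative):
let username_of_uid () =
  (* look up the passwd entry for the real user id and read its name field *)
  (Unix.getpwuid (Unix.getuid ())).Unix.pw_name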
Depending on your application:
if you don't really care about the value returned by "getlogin", you can do something like:
try
Unix.getlogin ()
with _ -> Sys.getenv "USER"
you will probably get something better than with getuid, since it will also work for programs run with the Set-User-ID flag (sudo/su).
if you really care about the value returned by "getlogin", i.e. you really want to know who is logged in, you should just fail when getlogin fails. Any other solution will give you only an approximation of the correct result.