Encrypted Chef data bag JSON file: how to decrypt and show contents?

I have encrypted data bags stored as JSON files, with some values I need to change. I need to run something like...
$ knife data bag from file show --secret-file path/to/secret DATABAGNAME --config path/to/knife.rb
But this command gives the error: Could not find or open file 'DATABAGNAME' in current directory or in 'data_bags/show/ewe-jenkins'. So obviously the command is not quite right. I need help figuring out the syntax...
I need a command that can be run from the chef-repo, or the data_bags directory, that will let me see the unencrypted values in the JSON data bag files. Ultimately I want to change some values, but getting the unencrypted values would be a good place to start :) thanks!

Since you're talking about local JSON files, I'll assume you are using chef-zero / local mode. The JSON file can indeed be encrypted, and the content can be decrypted with knife.
Complete example:
Create key and databag item:
$ openssl rand -base64 512 | tr -d '\r\n' > /tmp/encrypted_data_bag_secret
$ knife data bag create mydatabag secretstuff --secret-file /tmp/encrypted_data_bag_secret -z
Enter this:
{
"id": "secretstuff",
"firstsecret": "must remain secret",
"secondsecret": "also very secret"
}
The json file is indeed encrypted:
# cat data_bags/mydatabag/secretstuff.json
{
"id": "secretstuff",
"firstsecret": {
"encrypted_data": "VafoT8Jc0lp7o4erCxz0WBrJYXjK6j+sJ+WGKJftX4BVF391rA1zWyHpToF0\nqvhn\n",
"iv": "MhG09xFcwFAqX/IA3BusMg==\n",
"version": 1,
"cipher": "aes-256-cbc"
},
"secondsecret": {
"encrypted_data": "Epj+2DuMOsf5MbDCOHEep7S12F6Z0kZ5yMuPv4a3Cr8dcQWCk/pd58OPGQgI\nUJ2J\n",
"iv": "66AcYpoF4xw/rnYfPegPLw==\n",
"version": 1,
"cipher": "aes-256-cbc"
}
}
Show decrypted content with knife:
# knife data bag show mydatabag secretstuff -z --secret-file /tmp/encrypted_data_bag_secret
Encrypted data bag detected, decrypting with provided secret.
firstsecret: must remain secret
id: secretstuff
secondsecret: also very secret
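Since the ultimate goal is to change some values, the same secret can be used with knife data bag edit, which decrypts the item into your $EDITOR and re-encrypts it on save (a sketch using the data bag name and key path from the example above):

```
$ knife data bag edit mydatabag secretstuff -z --secret-file /tmp/encrypted_data_bag_secret
```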

I think you are confusing the knife data bag show and knife data bag from file commands. The former displays data from the server; the latter uploads it. Your command line mixes both.

Related

AES256 encrypted data unable to be copied and pasted

I am using OpenSSL to encrypt my data. Assume I have 3 rows of data (for simplicity):
0123456789
987654321
121212121
After encrypting, I get
Salted__èøm¬è!^¬ü
?‘¡ñ1•yÈ}, .◊¬ó≤|Úx$mø©
However, when I copy it using my Mac's Cmd+C and then paste it into another file to be decrypted, I get this error:
bad decrypt
0076160502000000:error:1C80006B:Provider routines:ossl_cipher_generic_block_final:wrong final block length:providers/implementations/ciphers/ciphercommon.c:429:
However, if I do not copy and paste the encrypted data, it can be decrypted properly. I believe this is because the spacing/newlines get changed. Is it that we cannot copy the data into another file to be decrypted and MUST use the exact file that was encrypted?
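Raw OpenSSL ciphertext is binary, and copying it through a terminal or editor will usually corrupt it. One way to make it copy-paste safe is to have openssl base64-encode it with the -a (base64) flag, so the output is plain ASCII. A minimal round-trip sketch (the password and filenames are made up for illustration):

```shell
# Encrypt with base64 armor: the output is printable ASCII, safe to copy/paste
printf '0123456789\n' > data.txt
openssl enc -aes-256-cbc -a -pbkdf2 -pass pass:example -in data.txt -out data.enc

# data.enc now starts with the base64 of "Salted__", e.g. "U2FsdGVk..."
cat data.enc

# Decrypt: -a tells openssl to base64-decode before decrypting
openssl enc -d -aes-256-cbc -a -pbkdf2 -pass pass:example -in data.enc -out data.dec
cmp data.txt data.dec && echo "round trip OK"
```

With -a, the ciphertext survives copy/paste as long as the lines themselves are preserved.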

How to avoid Jupyter cell-ids from changing all the time and thereby spamming the VCS diffs?

As discussed in q/66678305, newer Jupyter versions store in addition to the source code and output of cells an ID for the purpose of e.g. linking to a cell.
However, these IDs aren't stable but often change even when the cell's source code was not touched. As a result, if you have the .ipynb file under version control with e.g. git, the commits end up having lots of rather funny sounding “changed lines” that don't correspond to any actual change made in the commit. Like,
{
"cell_type": "code",
"execution_count": null,
- "id": "respected-breach",
+ "id": "incident-winning",
"metadata": {},
"outputs": [],
Is there a way to prevent this?
Answer for Git on Linux. Probably also works on macOS, but not Windows.
It is good practice to not VCS the .ipynb files as saved by Jupyter, but instead a filtered version that does not contain all the volatile information. For this purpose, various git hooks are available; the one I'm using is based on https://github.com/toobaz/ipynb_output_filter/blob/master/ipynb_output_filter.py.
Strangely enough, it turns out this script cannot be modified to remove the "id" field from cells. Namely, if you try to remove that field in the filtering loop, like with
for field in ("prompt_number", "execution_number", "id"):
if field in cell:
del cell[field]
then the write function from jupyter_nbformat will just put an id back in. It is possible to merely change the id to something constant, but then Jupyter will complain about nonunique ids.
As a hack to circumvent this, I now use this filter with a simple grep to delete the ID:
#!/bin/bash
grep -v '^ *"id": "[a-z\-]*",$'
Store that in e.g. ~/bin/ipynb_output_filter.sh, make it executable (chmod +x ~/bin/ipynb_output_filter.sh) and ensure you have the following ~/.gitattributes file:
*.ipynb filter=dropoutput_ipynb
and in your git config (either global ~/.gitconfig or project)
[core]
attributesfile = ~/.gitattributes
[filter "dropoutput_ipynb"]
clean = ~/bin/ipynb_output_filter.sh
smudge = cat
If you want to use a standard python filter in addition to that, you can invoke it before the grep in ~/bin/ipynb_output_filter.sh, like
#!/bin/bash
~/bin/ipynb_output_filter.py | grep -v '^ *"id": "[a-z\-]*",$'
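The grep pattern can be sanity-checked directly on a fragment of notebook JSON (two made-up lines; only the id line should be dropped):

```shell
# Feed two sample lines through the same pattern used in the filter;
# the "id" line is removed, the "metadata" line passes through.
printf '%s\n' '   "id": "respected-breach",' '   "metadata": {},' \
  | grep -v '^ *"id": "[a-z\-]*",$'
```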

I don't understand how many files I have to create

Implement the Caesar Cipher algorithm to encrypt and decrypt file contents using the C language. The cipher is the basic Caesar shift algorithm. Your program should have two C files named encrypt.c and decrypt.c that contain encrypt() and decrypt() functions correspondingly for this purpose. In the encrypt.c file, use the main() function to take input from an "input.txt" file and store the encrypted message in an "enc_msg.txt" file. In the decrypt.c file, use the main() function to take input from the "enc_msg.txt" file, store the decrypted message in a "dec_msg.txt" file, and print the decrypted message to console output as well. The key is 3.
Thanks
Create two .c files: encrypt.c and decrypt.c
Create a sample data file input.txt
Run your encrypt program to create the output file enc_msg.txt from input.txt
Run your decrypt program to create the output file dec_msg.txt from enc_msg.txt
So you need to create 3 files: encrypt.c, decrypt.c, and input.txt
Running your programs will generate two more files: enc_msg.txt and dec_msg.txt
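Before writing the C files, the key-3 Caesar mapping itself can be sanity-checked in the shell with tr (this only checks the letter shift; it is not a substitute for the required encrypt.c/decrypt.c):

```shell
# Shift every letter forward by 3 (Caesar key 3): a->d, ..., x->a, y->b, z->c
echo "Hello World" | tr 'A-Za-z' 'D-ZA-Cd-za-c'

# Shifting the result back by 3 restores the original text
echo "Khoor Zruog" | tr 'A-Za-z' 'X-ZA-Wx-za-w'
```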

How to list subfolders in Artifactory

I'm trying to write a script which cleans up old builds in my generic file repository in Artifactory. I guess the first step would be to look in the repository and check which builds are in there.
Each build shows up as a subfolder of /foo, so for example I have folders /foo/123, /foo/124, /foo/125/, etc.
There doesn't seem to be an ls or dir command. So I tried the search command:
jfrog rt search my-repo/foo/*
But this recursively lists all files, which is not what I'm looking for. I just need the list of direct subfolders. I also tried
jfrog rt search my-repo/foo/* --recursive=false
but this doesn't return any results, because the search command only returns files, not folders.
How do I list the subfolders of a given folder in an Artifactory repository?
Just one more way to do it, with curl and jq:
curl -s http://myartifactory.domain:4567/artifactory/api/storage/myRepo/myFolder | jq -r '.children[] |select(.folder==true) |.uri'
Explanation: curl fetches the folder info, which is piped to jq, which then prints the uri key of every entry in the children array whose folder key is true.
Just for easier understanding, the JSON that curl gets looks something like this (example from the Artifactory docs):
{
"uri": "http://localhost:8081/artifactory/api/storage/libs-release-local/org/acme",
"repo": "libs-release-local",
"path": "/org/acme",
"created": ISO8601 (yyyy-MM-dd'T'HH:mm:ss.SSSZ),
"createdBy": "userY",
"lastModified": ISO8601 (yyyy-MM-dd'T'HH:mm:ss.SSSZ),
"modifiedBy": "userX",
"lastUpdated": ISO8601 (yyyy-MM-dd'T'HH:mm:ss.SSSZ),
"children": [
{
"uri" : "/child1",
"folder" : "true"
},{
"uri" : "/child2",
"folder" : "false"
}
]
}
and for it the output of the command would be /child1.
Of course here it's assumed that artifactory repo myRepo allows anonymous read.
You should have a look at AQL (Artifactory Query Language): https://www.jfrog.com/confluence/display/RTF/Artifactory+Query+Language
As an example, the following AQL will retrieve all folders located in "my-repo" under the "foo" folder and display the result ordered by folder name:
items.find(
{
"type":"folder",
"repo":{"$eq":"my-repo"},
"path":{"$eq":"foo"}
}
)
.include("name")
.sort({"$desc":["name"]})
For cleanup, you can also have a look at the following example, which gives a list of the 10 biggest artifacts created more than a month ago that have never been downloaded:
items.find(
{
"type":"file",
"repo":{"$eq":"my-repo"},
"created":{"$before":"1mo"},
"stat.downloads":{"$eq":null}
}
)
.include("size","name")
.sort({"$desc":["size"]})
.limit(10)
Based on jroquelaure's answer, I ended up with the following. The key thing that was still missing was that you have to convert the "items.find" call into JSON when putting it in a filespec. There is an example of that in the filespec documentation which I missed at first.
I put this JSON in a test.aql file:
{
"files":
[
{
"aql":
{
"items.find" :
{
"type":"folder",
"repo":{"$eq":"my-repo"},
"path":{"$eq":"foo"}
}
}
}
]
}
Then I call jfrog rt search --spec=test.aql.
The jfrog CLI now includes the --include-dirs option for search.
The command:
jf rt search --recursive=false --include-dirs path/
will essentially act like an ls.
By default, it searches for files; if you want to list directories, add one more option, --include-dirs.
Refer to the jfrog search documentation for additional parameters.
Here is the command.
jf rt search --recursive=false --include-dirs=true path/
Response:
[
{
"path": "artifactory-name/path",
"type": "folder",
"created": "",
"modified": ""
}
]
A cleaner approach is to tell Artifactory about builds, and let it discard old ones.
There are 3 parts to this. My examples are for the jfrog command line utility:
When uploading files with the "jfrog rt upload" command, use the --build-name someBuildName and --build-number someBuildNumber arguments. This links the uploaded files to a certain build.
After uploading files, publish the build with "jfrog rt build-publish someBuildName someBuildNumber"
To clean up all but the 3 latest builds, use "jfrog rt build-discard --max-builds=3 someBuildName"
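Put together, the three steps above look roughly like this (the build name, number, and paths are made up for illustration, and the jfrog CLI must already be configured against your Artifactory instance):

```
# 1. Upload, tagging the files with a build name and number
jfrog rt upload "build-output/*.zip" my-repo/foo/ --build-name=myBuild --build-number=125

# 2. Publish the build info to Artifactory
jfrog rt build-publish myBuild 125

# 3. Keep only the 3 most recent builds; delete the artifacts of older ones
jfrog rt build-discard --max-builds=3 --delete-artifacts myBuild
```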

How to not delete in/outbound files after the Monitor() command in Asterisk

I'm recording calls via the Monitor() command.
While this command is running I can see two different files (Filename-in.wav and Filename-out.wav), and when the Monitor() command has finished it mixes those two files and merges them into one file (Filename.wav).
The problem is that I want to keep both files after the Monitor command's execution, but I haven't found a way to do it.
So after the final execution of the Monitor command I want to have three files, not only one.
Ex:
Filename-in.wav
Filename-out.wav
Filename.wav (the mixed one with outbound and inbound voice)
Is there anybody who can give me an easy solution?
You can use a custom script with MixMonitor. In that script you can do whatever you want, including keeping files like you described.
http://www.voip-info.org/wiki/view/MixMonitor
Note that in Filename.wav you have both inbound and outbound audio in different channels. So you can easily get the inbound only by muting the left channel, and the outbound only by muting the right channel.
My solution is to change the code of res_monitor.c and recompile it.
This is the portion of code that deletes the raw files:
00295 if (delfiles) {
00296 snprintf(tmp2,sizeof(tmp2), "( %s& rm -f \"%s/%s-\"* ) &",tmp, dir ,name); /* remove legs when done mixing */
00297 ast_copy_string(tmp, tmp2, sizeof(tmp));
00298 }
We just have to add delfiles = 0; at line 00294:
00294 delfiles = 0;
00295 if (delfiles) {
00296 snprintf(tmp2,sizeof(tmp2), "( %s& rm -f \"%s/%s-\"* ) &",tmp, dir ,name); /* remove legs when done mixing */
00297 ast_copy_string(tmp, tmp2, sizeof(tmp));
00298 }
I set delfiles = 0 to force the function not to remove the files.
After that, these are the commands you have to type:
cd /usr/src/asterisk-1.8.23.0
make
cp /usr/lib/asterisk/modules/res_monitor.so /usr/lib/asterisk/modules/res_monitor.so.backup
cp ./res/res_monitor.so /usr/lib/asterisk/modules
/etc/init.d/asterisk restart
and you keep using the Monitor() command as before, now with the behavior that keeps the raw files (Filename-in.wav and Filename-out.wav, and of course Filename.wav).
What arheops did not understand in that conversation is that the "command" argument is executed after the "in" and "out" legs have been mixed by (Mix)Monitor.
There is no other way to save the "receive" and "transmit" feeds than to either change the source code as l3on1das suggested (not good practice, though), or upgrade to Asterisk 11+, which now supports the t() and r() options for MixMonitor() to save the transmitted and received legs, respectively, in addition to the mixed output.
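On Asterisk 11+, the dialplan call would look something like this (filenames are illustrative; r() saves the received leg and t() the transmitted leg, alongside the mixed file):

```
; extensions.conf -- record the mixed audio plus both raw legs
exten => s,1,MixMonitor(Filename.wav,r(Filename-in.wav)t(Filename-out.wav))
```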
Good luck to anyone digging in asterisk for speech segmentation.