I have an issue with Algolia settings: I can't import or export settings from Algolia. There are no settings or tools to do this.
I want to do it using my own script. How can I do that? Is there an alternative, or do I have to create a script for it?
Check out the Algolia CLI tool!
Installation: npm install -g @algolia/cli
Docs: https://github.com/algolia/algolia-cli
While you can certainly still write your own scripts to import/export settings or records, the Algolia CLI tool also lets you do it from the command line like so:
$ algolia getsettings -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName>
and
$ algolia setsettings -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -s <sourceFilepath> -p <setSettingsParams>
The best way to export/import index settings is to use Algolia's REST API clients and their get_settings and set_settings methods.
Building a small script wrapping those two methods is pretty straightforward.
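If you don't want to depend on a client library, a minimal sketch of such a script could also hit the REST settings endpoints directly with curl (the app ID, admin API key, and index names below are placeholders to replace with your own):
# Export the settings of one index to a JSON file
curl -s -X GET \
  -H "X-Algolia-Application-Id: YOUR_APP_ID" \
  -H "X-Algolia-API-Key: YOUR_ADMIN_API_KEY" \
  "https://YOUR_APP_ID-dsn.algolia.net/1/indexes/YOUR_SOURCE_INDEX/settings" > settings.json

# Import the saved settings into another (or the same) index
curl -s -X PUT \
  -H "X-Algolia-Application-Id: YOUR_APP_ID" \
  -H "X-Algolia-API-Key: YOUR_ADMIN_API_KEY" \
  --data-binary @settings.json \
  "https://YOUR_APP_ID.algolia.net/1/indexes/YOUR_TARGET_INDEX/settings"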
Sepehr's answer is really helpful in pointing out how to achieve it with Algolia CLI. A time saver!
Here are the exact commands you need to execute on the command line in order to:
Export index:
algolia export -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -o <outputPath> -p <algoliaParams>
Example: algolia export -a EXAMPLE_APP_ID -k EXAMPLE_API_KEY -n EXAMPLE_INDEX_NAME -o ~/Desktop/example_output_folder/ -p '{"filters":["category:book"]}'
The -p params argument is optional; you can skip it.
Import index:
algolia import -s <sourceFilepath> -a <algoliaAppId> -k <algoliaApiKey> -n <algoliaIndexName> -b <batchSize> -t <transformationFilepath> -m <maxconcurrency> -p <csvToJsonParams>
Example: algolia import -s ~/Desktop/example_source_directory/ -a EXAMPLE_APP_ID -k EXAMPLE_API_KEY -n EXAMPLE_INDEX_NAME -b 5000 -t ~/Desktop/example_transformations.js -m 4 -p '{"delimiter":[":"]}'
More at https://github.com/algolia/algolia-cli#examples
We're facing some issues when installing Varnish 6.0.8 on Ubuntu 18.04.6: it doesn't create the secret file inside the /etc/varnish directory.
We use the following script for the installation:
curl -s https://packagecloud.io/install/repositories/varnishcache/varnish60lts/script.deb.sh | sudo bash
Can someone please help?
PS: We tried installing later versions (6.6 and 7.0.0) and got the same issue.
From a security point of view, remote CLI access is not enabled by default. You can see this when looking at /lib/systemd/system/varnish.service:
[Unit]
Description=Varnish Cache, a high-performance HTTP accelerator
After=network-online.target nss-lookup.target
[Service]
Type=forking
KillMode=process
# Maximum number of open files (for ulimit -n)
LimitNOFILE=131072
# Locked shared memory - should suffice to lock the shared memory log
# (varnishd -l argument)
# Default log size is 80MB vsl + 1M vsm + header -> 82MB
# unit is bytes
LimitMEMLOCK=85983232
# Enable this to avoid "fork failed" on reload.
TasksMax=infinity
# Maximum size of the corefile.
LimitCORE=infinity
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,256m
ExecReload=/usr/sbin/varnishreload
[Install]
WantedBy=multi-user.target
There are no -T and -S parameters in the standard systemd configuration. However, you can enable this by modifying the systemd configuration yourself.
Just run sudo systemctl edit --full varnish to edit the runtime configuration and add a -T parameter to enable remote CLI access.
Be careful with this and make sure you restrict access to this endpoint via firewalling rules.
Additionally, you'll add -S /etc/varnish/secret as a varnishd runtime parameter in /lib/systemd/system/varnish.service.
You can use the following command to add a random unique value to the secret file:
uuidgen | sudo tee /etc/varnish/secret
This is what your runtime parameters would look like:
ExecStart=/usr/sbin/varnishd \
-a :6081 \
-a localhost:8443,PROXY \
-p feature=+http2 \
-f /etc/varnish/default.vcl \
-s malloc,2g \
-S /etc/varnish/secret \
-T :6082
When you're done just run the following command to restart Varnish:
sudo systemctl restart varnish
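Assuming the -T :6082 and -S /etc/varnish/secret values from the example above, you should then be able to verify that the CLI endpoint responds:
varnishadm -T localhost:6082 -S /etc/varnish/secret ping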
I prepared an ARM template; it creates the listed Azure resources: a Linux VM deployment, a Storage Account deployment, and a file share in that Storage Account.
The ARM template works fine, but I would like to add one thing: mounting the file share on the Linux VM (using the script from the file share blade, the one proposed by Microsoft).
I would like to use the Custom Script Extension and its "commandToExecute" option to paste the inline Linux script (the one for mounting the file share).
My question is: how do I retrieve the password to the file share and pass it as a parameter to the inline script? Is that possible? Is it possible to paste the file share mounting script as an inline script in the ARM template? Maybe there is another way to complete my task? I know I can store the script in a storage account and put its "blob SAS URL" in the Custom Script Extension part of the ARM template, but the question of how to retrieve the password to the file share remains. Below is the script for the file share mount.
sudo mkdir /mnt/wsustorageaccount
if [ ! -d "/etc/smbcredentials" ]; then
sudo mkdir /etc/smbcredentials
fi
if [ ! -f "/etc/smbcredentials/StorageAccountName.cred" ]; then
sudo bash -c 'echo "username=xxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
sudo bash -c 'echo "password=xxxxxxx" >> /etc/smbcredentials/StorageAccountName.cred'
fi
sudo chmod 600 /etc/smbcredentials/StorageAccountName.cred
sudo bash -c 'echo "//StorageAccount.file.core.windows.net/test /mnt/StorageAccount cifs nofail,vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino" >> /etc/fstab'
sudo mount -t cifs //StorageAccountName.file.core.windows.net/test /mnt/StorageAccountName -o vers=3.0,credentials=/etc/smbcredentials/StorageAccountName.cred,dir_mode=0777,file_mode=0777,serverino
You can retrieve the storage account key in the template itself with listKeys, as in this quickstart example:
listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value
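For example, the Custom Script Extension's commandToExecute can be built with concat() so the key is passed to your mount script as an argument (mountfileshare.sh and the parameter/variable names here are hypothetical placeholders):
"commandToExecute": "[concat('bash mountfileshare.sh ', parameters('storageAccountName'), ' ', listKeys(variables('storageAccountId'), '2019-04-01').keys[0].value)]"
Putting commandToExecute under the extension's protectedSettings rather than settings keeps the key out of the deployment history.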
Can anyone tell me whether it is possible to execute pmrep commands in Informatica Cloud services to import and export workflow objects?
pmrep connect -r MY_REP -d MY_DOMAIN -n MY_USER -x MY_PASSWORD
./pmrep objectexport -o workflow -f $FOLDER -n $WORKFLOW -m -s -b -r -u ${EXPORTDIR}/${FOLDER}_${WORKFLOW}.xml
That's not possible in Informatica Cloud; you don't have access to the repository, which is hosted by Informatica.
You need to use the REST API to import and export objects from the IICS repository; the documentation is at the following link:
https://network.informatica.com/docs/DOC-17563
I imported a .p12 file into Login.keychain with the CLI:
$ security import /Users/xxx/Desktop/Certificates.p12 -k Login.keychain -P 1234 -A
But how do I delete this file from Login.keychain with the CLI?
System: macOS
You can probably use the commands below:
ls -l ~/Library/Keychains/
rm -rf ~/Library/Keychains/Login.keychain
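Note that this removes the entire login keychain file. If you only want to remove the imported identity, the security CLI also has delete-identity / delete-certificate subcommands; a rough sketch, where the common name and keychain path are placeholders to adjust:
security delete-identity -c "Your Cert Common Name" ~/Library/Keychains/login.keychain-db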
I'm trying to make use of this tool to check for security holes in our websites, 404s, etc.:
https://code.google.com/archive/p/skipfish/wikis/SkipfishDoc.wiki
As a test, I'm running it with:
./skipfish -B .google-analytics.com -B .googleapis.com -r 800000 -M -L -e -m 5 -g 10 -o output_folder8 http://www.ultranerds.co.uk
I'm hoping to automate this on a cron job and then email out the output. Is there a way to "auto start" it? I was hoping I could do something like what I use to copy files without having to confirm each one:
yes | cp -rf /installer/files_to_copy/* /
Thanks!
OK, so this kinda works:
yes | ./skipfish -B .google-analytics.com -B .googleapis.com -r 800000 -M -L -e -m 5 -g 10 -o output_folder8 http://www.ultranerds.co.uk
The downside is that it flashes back and forth between two different status screens, which makes it a bit hard to track what's going on.
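If the goal is just to run it unattended from cron and read the report afterwards, one option (untested, the schedule and paths are placeholders) is to keep the yes pipe but discard the live status screen, since skipfish still writes its report into the -o output folder:
# Hypothetical crontab entry: run nightly at 03:00, discard the interactive
# status screen, and keep the report in a dated output folder
0 3 * * * yes | /path/to/skipfish -B .google-analytics.com -B .googleapis.com -r 800000 -M -L -e -m 5 -g 10 -o /path/to/output_$(date +\%Y\%m\%d) http://www.ultranerds.co.uk > /dev/null 2>&1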