openstack stack update: No images matching

I wanted to update my stack just to change its flavor. I use the same template and only change the name of the flavor in the environment file:
openstack stack update myStack -t heat/server_base.tpl.yaml -e heat/dev/redis_server.env.yaml
I also tried it like this:
openstack stack update myStack --existing --parameter flavor=a1-ram2-disk20-perf1
Both give me the same error:
"resource_status_reason": "resources.db_instance_group: Property error: resources[0].properties.image: Error validating value 'Debian 11.5 bullseye': No images matching {'name': 'Debian 11.5 bullseye'}."
I have already done this on multiple other stacks without any problems. Any ideas?
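The error means Glance could not find any image whose name exactly matches 'Debian 11.5 bullseye' in the project the update runs against; one quick way to double-check the visible image names, for example:
# list the image names Glance can actually see for this project
openstack image list | grep -i debian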


Dataflow from Colab issue

I'm trying to run a Dataflow job from Colab and getting the following worker error:
sdk_worker_main.py: error: argument --flexrs_goal: invalid choice: '/root/.local/share/jupyter/runtime/kernel-1dbd101c-a79e-432e-89b3-5ba68df104d7.json' (choose from 'COST_OPTIMIZED', 'SPEED_OPTIMIZED')
I haven't provided the flexrs_goal argument, and if I do it doesn't fix this issue. Here are my pipeline options:
beam_options = PipelineOptions(
    runner='DataflowRunner',
    project=...,
    job_name=...,
    temp_location=...,
    subnetwork='regions/us-west1/subnetworks/default',
    region='us-west1'
)
My pipeline is very simple; it's just:
with beam.Pipeline(options=beam_options) as pipeline:
    (pipeline
     | beam.io.ReadFromBigQuery(
         query=f'SELECT column FROM {BQ_TABLE} LIMIT 100')
     | beam.Map(print))
It looks like the command line args for the sdk worker are getting polluted by jupyter somehow. I've rolled back to the past two apache-beam library versions and it hasn't helped. I could move over to Vertex Workbench but I've invested a lot in this Colab notebook (plus I like the easy sharing) and I'd rather not migrate.
Figured it out. The PipelineOptions constructor will pull in sys.argv if no parameter is given for the first argument (called flags). In my case it was pulling in the command line args that my jupyter notebook was started with and passing them as Beam options to the workers.
I fixed my issue by doing this:
beam_options = PipelineOptions(
    flags=[],
    ...
)
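For example, printing sys.argv inside the notebook shows the kernel launch arguments that PipelineOptions would otherwise pick up (the exact kernel path will differ):
import sys

# In a Jupyter/Colab kernel, sys.argv typically holds the kernel launcher arguments,
# e.g. ['.../ipykernel_launcher.py', '-f', '/root/.local/share/jupyter/runtime/kernel-<id>.json']
print(sys.argv)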

define SAMPLE for different dir name and sample name in snakemake code

I have written a Snakemake workflow to run bwa_map. The fastq files are paired-end and sit in folders whose names differ from the sample names. It fails with the error 'SAMPLES' is not defined. Please help.
Error:
$ snakemake --snakefile rnaseq.smk mapped_reads/EZ-123-B_IGO_08138_J_2_S101_R2_001.bam -np
NameError in line 2 of /Users/singhh5/Desktop/tutorial/rnaseq.smk:
name 'SAMPLES' is not defined
File "/Users/singhh5/Desktop/tutorial/rnaseq.smk", line 2, in <module>
#SAMPLE DIRECTORY
fastq
  Sample_EZ-123-B_IGO_08138_J_2
    EZ-123-B_IGO_08138_J_2_S101_R1_001.fastq.gz
    EZ-123-B_IGO_08138_J_2_S101_R2_001.fastq.gz
  Sample_EZ-123-B_IGO_08138_J_4
    EZ-124-B_IGO_08138_J_4_S29_R1_001.fastq.gz
    EZ-124-B_IGO_08138_J_4_S29_R2_001.fastq.gz
#My Code
expand("~/Desktop/{sample}/{rep}.fastq.gz", sample=SAMPLES)

rule bwa_map:
    input:
        "data/genome.fa",
        "fastq/{sample}/{rep}.fastq"
    conda:
        "env.yaml"
    output:
        "mapped_reads/{rep}.bam"
    threads: 8
    shell:
        "bwa mem {input} | samtools view -Sb -> {output}"
The specific error you are seeing is because the variable SAMPLES isn't set to anything before you use it in expand.
Some other issues you may run into:
Output file is missing the {sample} wildcard.
The value of threads isn't passed into bwa or samtools
You should place your expand in the input directive of the first rule in your Snakefile (typically called all) to properly request the files from bwa_map.
You aren't pairing your reads (R1 and R2) in bwa.
You should look around Stack Overflow or some GitHub projects for similar rules to give you inspiration on how to do this mapping.
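For example, a rough, untested sketch along those lines (sample names and directories are copied from the listing above and may need adjusting) could look like this:
# Rough sketch only -- map each sample to the directory holding its paired fastq files.
FASTQ_DIRS = {
    "EZ-123-B_IGO_08138_J_2_S101": "fastq/Sample_EZ-123-B_IGO_08138_J_2",
    "EZ-124-B_IGO_08138_J_4_S29": "fastq/Sample_EZ-123-B_IGO_08138_J_4",
}

rule all:
    input:
        expand("mapped_reads/{sample}.bam", sample=FASTQ_DIRS)

rule bwa_map:
    input:
        ref="data/genome.fa",
        r1=lambda wc: FASTQ_DIRS[wc.sample] + "/" + wc.sample + "_R1_001.fastq.gz",
        r2=lambda wc: FASTQ_DIRS[wc.sample] + "/" + wc.sample + "_R2_001.fastq.gz",
    output:
        "mapped_reads/{sample}.bam"
    conda:
        "env.yaml"
    threads: 8
    shell:
        "bwa mem -t {threads} {input.ref} {input.r1} {input.r2} "
        "| samtools view -Sb - > {output}"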

Use AWK to safely search and replace URLs in Wordpress SQL-Dump

I am working on a webtool to mirror a Wordpress installation into a development system.
The aim is to have a Live system for production and a development system for testing. The webtool then offers a one-click-sync between those systems.
Each of the systems is standalone, with its own webroot, database and url.
I am having trouble with the database dump, in which I have to find all references to the source URL and replace them with the URL of the destination (e.g. "www.example.com" -> "www-dev.example.com").
What I need to do is:
Find all occurrences of the URL and replace them with the new one.
If the match is also part of a serialized string, it should set the field separator, re-split the record, and write the actual string length back into the serialized array.
In a first attempt I tried to solve this with a sed command that looks as follows: sed -i.orig 's/360\.example\.com/360-dev\.my\.example\.dev/g'.
This didn't work because the dump contains serialized arrays that include the URL, and sed cannot update the string-length indicator of those serialized values.
My latest attempt is to use awk, as suggested here, because it is capable of arithmetic operations.
My awk script looks like this:
/360[.]example[.]com/ {
    sub("360.example.com", "360-dev.my.example.dev");
    if ($0 ~ /s:[[:digit:]]+:["](http[s]?:\/\/)?360[.]example[.]com["]/) {
        FS="\"";
        $0=$0;
        n=length($2)-1;
        sub(/:[[:digit:]]+:/, ":" n ":");
    }
} 1
There seem to be some errors in my script which I can't find. It does not replace all of the occurrences of the URL and completely skips the length-indicator update.
How can I fix my script to achieve what I want to do?
EDIT: (Added Input/Output samples)
The database dump consists of the whole WordPress database, with CREATE TABLE IF NOT EXISTS and INSERT statements for each table and record.
Normal (unserialized) occurrence:
(36, 'home', 'http://360.example.com/blogname', 'yes'),
should result in:
(36, 'home', 'http://360-dev.my.example.dev/blogname', 'yes'),
Serialized occurrence:
(404, 'wp-maintenance-mode', 'a:21:{s:6:"active";i:1;s:4:"time";i:0;s:4:"link";i:1;s:7:"support";i:0;s:10:"admin_link";i:1;s:7:"rewrite";s:0:"";s:6:"notice";i:1;s:4:"unit";i:1;s:5:"theme";i:0;s:8:"styleurl";s:69:"http://360.example.com/wp-content/themes/blogname/css/maintenance.css";s:5:"index";i:0;s:5:"title";s:0:"";s:6:"header";s:0:"";s:7:"heading";s:0:"";s:4:"text";s:12:"Example Text";s:7:"exclude";a:1:{i:0;s:0:"";}s:6:"bypass";i:0;s:4:"role";a:1:{i:0;s:13:"administrator";}s:13:"role_frontend";a:1:{i:0;s:13:"administrator";}s:5:"radio";i:0;s:4:"date";s:0:"";}', 'yes'),
Should result in (the replacement URL is 7 characters longer, so the length indicator changes from s:69 to s:76):
(404, 'wp-maintenance-mode', 'a:21:{s:6:"active";i:1;s:4:"time";i:0;s:4:"link";i:1;s:7:"support";i:0;s:10:"admin_link";i:1;s:7:"rewrite";s:0:"";s:6:"notice";i:1;s:4:"unit";i:1;s:5:"theme";i:0;s:8:"styleurl";s:76:"http://360-dev.my.example.dev/wp-content/themes/blogname/css/maintenance.css";s:5:"index";i:0;s:5:"title";s:0:"";s:6:"header";s:0:"";s:7:"heading";s:0:"";s:4:"text";s:12:"Example Text";s:7:"exclude";a:1:{i:0;s:0:"";}s:6:"bypass";i:0;s:4:"role";a:1:{i:0;s:13:"administrator";}s:13:"role_frontend";a:1:{i:0;s:13:"administrator";}s:5:"radio";i:0;s:4:"date";s:0:"";}', 'yes'),
EDIT 2:
Now using wp-cli to do the task of search & replace.
I've got a multisite setup with blogs numbered (2,3,9).
Executing wp search-replace --url=360.example.com '360.example.com' '360-dev.my.example.dev' results in an error, telling me that the Single-Site tables (wp_redirection_items and wp_redirection_groups) cannot be found.
This is true: they really do not exist under those names, but rather per blog (e.g. wp_2_redirection_items and so on). This error results in over 9000 missed occurrences in the search & replace. It's possible to replace everything with wp search-replace --url=360.example.com '360.example.com' '360-dev.my.example.com' wp_*, but it still throws the error.
As suggested by @archimiro, the task is now done by wp-cli.
But as I also have a multisite setup, which led to some errors, I had to figure out the command for a full-database search-replace task.
The final command:
wp search-replace --url=360.example.com '360.example.com' '360-dev.my.example.dev' wp_*
Without explicitly telling wp-cli to search & replace in ALL (wp_*) tables, it would stop as soon as a "table not found" error is thrown.
Also not awk or wp-cli, but this is a PHP function I wrote that seems to work well.
function snr($search, $replace, $inputfile, $outputfile){
    $sql = file_get_contents($inputfile);
    $sql1 = str_replace($search, $replace, $sql);
    file_put_contents($outputfile, $sql1);
    $serstrings = preg_split("/(?<=[{;])s:/", $sql1);
    foreach ($serstrings as $i => $serstring) {
        if (!!strpos($serstring, $replace)) {
            $justString = str_replace("\\", "", str_replace("\\\\", "j", explode('\\";', explode(':\\"', $serstring)[1])[0]));
            $correct = strlen($justString);
            $serstrings[$i] = preg_replace('/^\d+/', $correct, $serstrings[$i]);
        }
    }
    file_put_contents($outputfile, implode("s:", $serstrings));
}
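For example, it could be called like this (the file names are just illustrative):
// illustrative call; adjust the dump file names to your setup
snr('360.example.com', '360-dev.my.example.dev', 'com.sql', 'local.sql');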
I've used this in the past with success:
sed 's|360\.example\.com|360-dev\.my\.example\.dev|g' com.sql > local.sql
Edit: sorry not awk, but neither is wp-cli.

Debugging bitbake pkg_postinst_${PN}: Append to config-file installed by other recipe

I'm writing an openembedded/bitbake recipe for openembedded-classic. My recipe RDEPENDS on keyutils, and everything seems to work except one thing:
I want to append a single line to the /etc/request-key.conf file installed by the keyutils package. So I added the following to my recipe:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ..' >> ${sysconfdir}/request-key.conf
}
However, the intended added line is missing in my resulting image.
My recipe inherits update-rc.d if that makes any difference.
My main question is: how do I debug this? Currently I am constructing an entire rootfs image and then poking around in it to see if the changes show up. Surely there is a better way?
UPDATE:
Changed recipe to:
pkg_postinst_${PN} () {
    echo 'create ... more stuff ...' >> ${D}${sysconfdir}/request-key.conf
}
But still no luck.
As far as I know, postinst runs at rootfs creation time, and only runs at first boot if it failed during rootfs creation.
So there is an easy way to execute something only at first boot: just check for $D, like this:
pkg_postinst_stuff() {
    #!/bin/sh -e
    if [ x"$D" = "x" ]; then
        # do something at first boot here
    else
        exit 1
    fi
}
postinst scripts are run at rootfs time, so ${sysconfdir} is /etc on your host. Use $D${sysconfdir} to write to the file inside the rootfs being generated.
OE-Classic is pretty ancient so you really should update to oe-core.
That said, do postinsts run at first boot? I'm not sure. Also look in the recipe's work directory under the temp directory and read the log and run files to see if there are any clues there.
One more thing. If foo RDEPENDS on bar that just means "when foo is installed, bar is also installed". I'm not sure it makes assertions about what is installed during the install phase, when your postinst is running.
If using $D doesn't fix the problem, try editing your postinst to copy the existing file you're trying to edit somewhere else, so you can verify that it exists in the first place. It's possible that you're appending to a file that doesn't exist yet, and that the package that installs the file later replaces it.
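Putting those suggestions together, a rough, untested sketch of the postinst might look like this (the echoed line is still the placeholder from the question):
pkg_postinst_${PN} () {
    if [ -n "$D" ]; then
        # image build time: write into the rootfs being generated
        echo 'create ... more stuff ...' >> $D${sysconfdir}/request-key.conf
    else
        # first boot on the target
        echo 'create ... more stuff ...' >> ${sysconfdir}/request-key.conf
    fi
}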

Unix SQLLDR script gives 'Unexpected End of File' error

All, I am running the following script to load data onto the Oracle server from a Unix box using sqlldr. Earlier it gave me an error saying sqlldr: command not found. I added "SQLPLUS < EOF", but it still gives me an 'unexpected end of file' syntax error on line 12, even though the script is only 11 lines of code. What do you think the problem is?
#!/bin/bash
FILES='ls *.txt'
CTL='/blah/blah1/blah2/name/filename.ctl'
for f in $FILES
do
cat $CTL | sed "s/:FILE/$f/g" >$f.ctl
sqlplus ID/'PASSWORD'@SERVERNAME << EOF
sqlldr SCHEMA_NAME/SCHEMA_PASSWORD control=$f.ctl data=$f
EOF
done
sqlplus will never know what to do with the command sqlldr. They are two complementary cmd-line utilities for interfacing with Oracle DB.
Note that NO sqlplus or EOF etc. is required to load data into a schema:
#!/bin/bash
#you dont want this: FILES='ls *.txt'
CTL_PATH='/blah/blah1/blah2/name/'
CTL_FILE="$CTL_PATH/filename.ctl"
SCHEMA_NM=SCHEMA_NAME
SCHEMA_PSWD=SCHEMA_PASSWORD
for f in *.txt
do
    # don't need cat! cat $CTL | sed "s/:FILE/$f/g" >"$f".ctl
    sed "s/:FILE/$f/g" "$CTL_FILE" > "$CTL_PATH/$f.ctl"
    #myBad sqlldr "$SCHEMA_NAME/$SCHEMA_PASSWORD" control="$CTL_PATH/$f.ctl" data="$f"
    sqlldr "$SCHEMA_NM/$SCHEMA_PSWD@$SERVER_NAME" control="$CTL_PATH/$f.ctl" data="$f" rows=10000 direct=true errors=999
done
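For reference, the single standard control file that the sed line rewrites might look roughly like this; the table and column names are purely illustrative, and only the :FILE placeholder comes from the question:
-- illustrative control file: sed replaces :FILE with the actual data file name
LOAD DATA
INFILE ':FILE'
APPEND INTO TABLE my_table
FIELDS TERMINATED BY ','
(col1, col2, col3)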
Without getting too philosophical, using assignments like FILES=$(ls *.txt) is a bad habit to get into. By contrast, for f in *.txt will deal correctly with files that have odd characters in them (like spaces or other syntax-breaking values). BUT the other habit you do want to get into is quoting all variable references (like $f) with double quotes: "$f", OK? ;-) This is the other side of protection for files with spaces etc. embedded in them.
In the edit update, I've turned your CTL_PATH and CTL_FILE into variables. I think I understand your intent: you have one standard CTL_FILE that you pass through sed to create a table-specific .ctl file (a good approach in my experience). Note that you don't need cat to send a file to sed, but your use of redirection (> $f.ctl) to create an altered file is very shell-like too.
In the 2nd edit update, I looked here on S.O., found an example sqlldr command line with the correct syntax, and modified it to work with your variable names.
To finish up,
A. Are you sure the Oracle Client package is installed on the machine that you are running your script on?
B. Is the /path/to/oracle/client/tools/bin included in your working $PATH?
C. Try which sqlldr. If you don't get anything, either it's not installed or it's not in the path.
D. If not installed, you'll have to get it installed.
E. Once installed, note the directory that contains the sqlldr cmd. find / -name 'sqlldr*' will take a long time to run, but it will print out the path you want to use.
F. Take the "path" part of what is returned (like /opt/oracle/11.2/client/bin/, but not the sqlldr at the end), and edit the script at the 2nd line with:
(Txt added to appease the S.O. Formatter ;-) )
export ORCL_PATH="/path/you/found/to/oracle/client"
export PATH="$ORCL_PATH:$PATH"
These steps should solve any remaining issues. If this doesn't work, see if there is someone where you work who understands your local computing environment and can help explain any missing or different steps.
IHTH
