How to set -n (number of users) for database server? - openedge

We have increased the -n parameter in the broker/db.pf file. We restarted the server, but when we check in promon it still shows the same number of users. How do we increase the -n parameter?

I know you answered this yourself, but a real answer may help future readers. There are several ways to set parameters like -n. This answer applies to changing any startup parameter (but not to which values are "good").
How you change this value depends on how you start your database. See below.
NB 1: be aware of your licensing plan before changing this number, and contact your sales representative if needed.
NB 2: be aware that changing startup parameters can affect performance etc. Test new values in a separate environment before moving them to production.
NB 3: back up all files before making changes.
Managed Database
A managed database is a database that is handled by the AdminServer. OE Management is not needed for this approach; a working installation of OE Explorer is, however, recommended.
The managed database is started (and stopped, etc.) via either the web-based OE Explorer interface or the dbman command-line utility.
Settings are stored in conmgr.properties under your Progress installation. You can edit this file manually (save a copy first) or via OE Explorer (the recommended way).
You will have a line like this in the file:
maxusers=20 # -n
Edit the number to your liking with your favourite editor.
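A hand edit takes effect the next time the database is restarted, for instance via dbman. A minimal sketch (mydb is a hypothetical database name; check dbman's exact options for your release):
# Restart the managed database so the new -n value is picked up
dbman -database mydb -stop
dbman -database mydb -start
# Ask the broker for its status to confirm it is back up
dbman -database mydb -query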
You can also change this in the OE Explorer:
Log in to OE Explorer. Default location is http://servername:9090/.
Locate and click on the database (if it's not there, it's not handled by the AdminServer; see below).
Select Configuration
Select Configuration (again, not "servergroup")
Click EDIT
-n (or Max users) is located in the first group of settings ("General").
Edit the value and don't forget to save.
Scripted Database
A scripted database is a database that is started with a custom script (or directly from the command line). The actual startup could be handled by crontab, a user, the server's generic startup script, etc.
The OE AdminServer is not "aware" of this database. (You can make the AdminServer "a little" aware of it by running the dbagent command-line utility with certain parameters. Read more about this in the manual.)
Generally, there are two ways of handling the script: with the parameters inline, or with the parameters in a separate parameter file (often with the extension .pf).
Script with parameters in it
With this approach you store all parameters in the actual startup script.
proserve <dbname> -H <hostname> -S <serviceport> -n 10 -B 10000 -spin 10000 etc.
Script with a separate parameter file
With this approach you store the parameters in a separate file.
proserve <dbname> -pf /path/to/file/file.pf
The .pf file can be formatted like the parameters on the command line:
-db <dbname> -H <hostname> -S <service> etc.
Or with newlines (this allows for comments in the file):
# Main database
-db <dbname>
-H <hostname>
-S <service>
You can also mix these two approaches, as shown below.
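For example (hypothetical value; as far as I know, when the same parameter appears more than once the value read last wins, but verify this against your version's documentation):
proserve <dbname> -pf /path/to/file/file.pf -n 50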
Sources:
OE Management and OE Explorer
OE Database Management


Question about differences using fscrypt on ubifs compared with ext4

I am working on an embedded Linux project that can run on multiple platforms. One uses e.MMC for storage and another NAND flash. I want to encrypt all the filesystems (mainly to protect against someone unsoldering the flash chips and putting them in a reader), and I want to maintain a common approach across both hardware types as far as possible. One big difference between the two is that wear levelling is in the hardware for the e.MMC, whereas for NAND I'll be using UBI. For the root filesystem I am thinking of using squashfs protected with dm-crypt. For the NAND device I have tried this out: I can layer dm-crypt on top of ubiblock and then use the device mapper to load the squashfs. This maps nicely to the e.MMC world, with the only difference being that the device mapper sits on a GPT partition rather than a ubiblock device.
My challenge is the other read/write filesystems. I want to mount an overlay filesystem on top of the read-only root, plus a data partition, and I want both of these to be encrypted as well. I have been investigating how fscrypt can help me (I believe dm-crypt won't work with ubifs). For filesystems on the e.MMC I will be using ext4, and for NAND, ubifs. The documentation says both of these support fscrypt. I've struggled a bit to find detailed documentation about how to use this with ubifs (there is a lot more for ext4), but I think there are some differences between how this has been implemented on each, and I'd like those who know more to confirm this.
On the NAND side I have only been able to get it to work by using the fscryptctl tool (https://github.com/google/fscryptctl) as opposed to the fuller-featured fscrypt tool (https://github.com/google/fscrypt). This was following instructions I found in a patch adding fscrypt support to mkfs.ubifs here: https://patchwork.ozlabs.org/project/linux-mtd/cover/20181018143718.26298-1-richard#nod.at/
This appears to encrypt all the files on the partition using the supplied key. When I look at fscrypt on ext4, it seems that you can't do this: the root directory cannot itself be encrypted, only sub-directories. The documentation at https://www.kernel.org/doc/html/v4.17/filesystems/fscrypt.html says:
"Note that the ext4 filesystem does not allow the root directory to be
encrypted, even if it is empty. Users who want to encrypt an entire
filesystem with one key should consider using dm-crypt instead."
So this is different. It also seems that with ubifs I can't apply encryption to sub-directories like I could on ext4. The README.md at https://github.com/google/fscryptctl gives an example using ext4, which encrypts a subdirectory called test. I don't see how to do the same thing using ubifs. Could someone help me?
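For reference, the README's ext4 flow looks roughly like this. This is a paraphrased sketch using the legacy v1 key API that fscryptctl v0.x exposes (the same get_descriptor/insert_key commands as the script at the end of this post); raw.key and the mountpoint are hypothetical, and the policy can only be applied to an empty directory:
# Derive the key descriptor and load the key into the session keyring
descriptor=$(fscryptctl get_descriptor < raw.key)
fscryptctl insert_key < raw.key
# Apply an encryption policy to an empty subdirectory on ext4
mkdir /mnt/disk/test
fscryptctl set_policy "$descriptor" /mnt/disk/test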
I've been using the nandsim kernel module for testing. At the end of this post is a script for building an encrypted overlay ubifs filesystem. As you can see, mkfs.ubifs takes the key directly and appears to apply it to all the files on the partition. You can't then apply policies to any sub-directories, as they are already encrypted.
I would like to use some of the other features that the userspace fscrypt tool provides, e.g. protectors (so I don't need to use the master key directly). I can't, however, see any way to get the userspace fscrypt tool to set up encryption on a ubifs. The userspace fscrypt command creates a .fscrypt directory in the root of the partition to store information about policies and protectors. This seems to fit better with the ext4 implementation, where the root itself isn't encrypted.
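For comparison, the ext4-style flow with the userspace tool goes roughly like this (a sketch based on the google/fscrypt README; /mnt/disk is a hypothetical mountpoint):
# One-time global setup, then per-filesystem metadata (the .fscrypt directory)
fscrypt setup
fscrypt setup /mnt/disk
# Encrypt an empty directory, choosing or creating a protector interactively
mkdir /mnt/disk/dir
fscrypt encrypt /mnt/disk/dir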
When I try to set up an unencrypted ubifs with "fscrypt setup" I run into trouble, because making a standard ubifs seems to create a v4 ubifs format version rather than the required v5. This means the "fscrypt encrypt" command fails, with errors like this in the dmesg output:
[12022.576268] UBIFS error (ubi0:7 pid 6006): ubifs_enable_encryption [ubifs]: on-flash format version 5 is needed for encryption
Is there some way to get mkfs.ubifs to create an unencrypted v5-formatted filesystem? Or does v5 mean encrypted?
Here is my script to create an encrypted ubifs using the fscryptctl tool:
#!/bin/bash
MTD_UTILS_ROOT=../../mtd-utils
FSCRYPTCTL=../../fscryptctl/fscryptctl
MOUNTPOINT=./mnt
dd if=/dev/urandom of=overlay.keyfile count=64 bs=1 # XTS needs a 512-bit key
descriptor=`$FSCRYPTCTL get_descriptor < overlay.keyfile`
# Build an image of ./overlay with every file encrypted under the key
$MTD_UTILS_ROOT/mkfs.ubifs --cipher AES-256-XTS --key overlay.keyfile \
  -m 2048 -e 129024 -c 32 -r ./overlay -o overlay.enc.img
$MTD_UTILS_ROOT/ubiupdatevol /dev/ubi0_6 overlay.enc.img
# Try it out
$FSCRYPTCTL insert_key < overlay.keyfile
key=`keyctl show | grep $descriptor | awk '{print $1}'`
mount -t ubifs /dev/ubi0_6 $MOUNTPOINT
ls $MOUNTPOINT
umount $MOUNTPOINT
keyctl unlink $key
NB I've been working with mtd-utils v2.1.2 on a 5.4 kernel.

Can the qpad queries be recovered if qpad is closed but the port is open?

I ran multiple queries, but before I could save them QPad crashed. However, the q port on which these queries were running (on my Windows machine) is still open. I can recover the variables and functions with \v and \f respectively.
Is there a way to recover all the q statements I ran using QPad? I forgot to maintain a log file, hence I am trying to find a way to recover the queries via the q port.
Thanks
Unfortunately there's no way to retrieve your old queries, for the reasons Davis.Leong gave. But if you can't/don't want to create a table on your server to save them, you can also check the "log queries" box in QPad settings:
Q > Settings > Editor > Log queries to "queries_date.log"
Now when you run queries, they will be written to this log file in the same directory as QPad.exe, along with the server and timestamp, like this:
/ 02/26/19 09:54:52 on `:localhost:1234:: from QPad1*
show `logthis
/ 02/26/19 10:03:03 on `:localhost:1234:: from QPad1*
a:10
Unfortunately I don't think there is a way to retrieve your command history. Others have already mentioned why, so I will not go into that. You can, however, easily maintain a log file in the future:
When you start your server, adding the -l flag allows you to define a path to a log file; any commands sent to the server from a client will then be logged. For example
q ../log/logtest -l -p 5555
t:([]date:`date$();sym:`sym$();price:`float$())
will start a q process listening on port 5555, logging any messages that cause the server to update. So if I open a handle to 5555 in another q session
h:hopen `::5555
and update table t
q)h"insert[`t](2000.01.01;`appl;102.3)"
,0
the server will have updated t like so
q)t
date sym price
---------------------
2000.01.01 appl 102.3
A log file will be created showing the commands sent to the server. NOTE however that it only logs commands that change the state of the server's data.
In the event of a server crash, this log file can be replayed by starting q with the same command as before.
The answer is no. QPad is a GUI that interacts with the q process. The reason you can retrieve the variables and functions is that the process did not die. Queries, by default, are not saved by q, unless you customize your .z.pg to upsert a record into a queryHistory table.
e.g.
q).z.pg:{[x]`queryHistory insert ([]queryTime:.z.P;query:enlist x)}
q)queryHistory:([]queryTime:`timestamp$();query:())
q)10+10
20
q)testTab:([]sym:10?`1;val:10?100)
q)queryHistory
queryTime query
---------------
queryHistory is not appended with a record here because these queries were run in the q process itself; .z.pg is only invoked for synchronous messages from remote clients. If you run them from your QPad:
10+10
testTab:([]sym:10?`1;val:10?100)
you can see that records are appended, so even if your GUI crashes you can trace the queries:
q)queryHistory
queryTime query
-------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
q)queryHistory
queryTime query
----------------------------------------------------------------
2019.02.26D17:32:38.471063000 "10+10"
2019.02.26D17:32:52.790863000 "testTab:([]sym:10?`1;val:10?100)"
I recently got to know that there is a backup of your q scripts at "C:/Users//AppData/Local", autosaved every 5-6 minutes. These are temporary files which are deleted when you save the script. However, if your QPad crashed, you can find your files there :)

Docker: unix "who" command doesn't work inside container

I have a Docker image with one non-root user named builder.
The application that is supposed to run inside the container uses the Unix who command.
For some reason it returns an empty string inside the container:
builder@2dc3831c558b:~$ who
builder@2dc3831c558b:~$
I cannot use whoami because of implementation details.
(I'm using Docker 1.6.2 on Debian Jessie)
EDIT (additional details regarding why I use "who"):
I use the command who with the parameters am i, that is, who am i. This is supposed to return the user who originally logged in. So, for example, sudo who am i returns builder, while sudo whoami returns root.
The command who includes options like -b: time of last system boot. Since all commands from a container translate into system calls to the kernel, that would not return anything container-related, but docker-host-related (i.e. about the underlying host).
See also "Difference between who and whoami commands": whoami prints effective username of being ran whoami, which is not the same as who (printing information about users who are currently logged in).
The current workarounds listed in issue 18547 are:
The registry configuration is stored in the client, so something as simple as cat ~/.docker/config.json will give you the answer you're looking for.
docker info | grep Username should give you this information.
But neither is the same as running a command from within a container session; id -u might be closer.
By default, there is no login session when a container is started by the Docker daemon.
As Auzias commented, only a direct ssh connection (initiating a login session) would allow who to return anything. But with Docker this is generally not needed, since docker exec exists (for debugging purposes) and spares the image maintainer from including ssh unless it is really needed.
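To make the failure mode concrete: who reads login records from /var/run/utmp, and nothing in a typical container ever writes that file, so the command succeeds but prints nothing. A minimal sketch (mycontainer is a hypothetical container name):
# who consults /var/run/utmp, which no login manager populates inside the container
docker exec mycontainer sh -c 'who; echo "exit status: $?"'
# id needs no login session and reports the current user
docker exec mycontainer id -un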

Short Webspeed Broker Query

I am trying to create a script that informs the user about the status of a website which uses WebSpeed. I can use wtbman to output the status of the transaction server, that's not a problem. But I want something that simply tells us whether the transaction server is up or down.
Is there a command I can use to achieve that, instead of writing a program to parse the string returned by wtbman?
There's no built-in approach like, for instance, the Virtual System Tables a database provides.
Parsing the output of
wtbman -name <broker> -query
is your best bet. The output isn't very hard to decipher, so you should be able to do it quite quickly!
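For instance, a minimal shell sketch (wsbroker1 is a hypothetical broker name, and the grep pattern is an assumption: check what your installation's -query output prints for a running broker and adjust accordingly):
#!/bin/sh
# Report up/down based on the broker's -query output
if wtbman -name wsbroker1 -query 2>/dev/null | grep -qi "active"; then
    echo "WebSpeed transaction server is up"
else
    echo "WebSpeed transaction server is down"
fi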
Two other commands to check are:
wtbman -name <broker> -agentdetail <pid>
This will go more into detail of a specific agent.
wtbman -name <broker> -listallprops
Lists all settings for the broker.

Syncing images folder on two servers

Is there a way to sync the images folder between my live server and the staging server, so that when a new image is added to the live server it is copied automatically to staging?
I'm currently on Rackspace servers (both of them).
You haven't mentioned which operating system you're using, or how immediate this needs to be. I would look into using rsync: set up login using ssh key authentication (instead of a password), and add a cron job that runs rsync regularly.
On live, as the user that will do the copying, run this command:
ssh-keygen
(Leave the passphrase empty.)
Next, copy the public key to the staging server. Make sure you don't overwrite an existing authorized_keys file; if one already exists, append id_rsa.pub to it instead (where available, ssh-copy-id staging-server does this safely):
scp ~/.ssh/id_rsa.pub staging-server:.ssh/authorized_keys
Finally, set up the cron job (note that crontab - replaces the user's entire existing crontab; if one already exists, add the line with crontab -e instead):
echo '15,45 * * * * rsync -avz -e ssh /path/to/images staging-server:/path/to' | crontab -
This runs rsync at quarter past and quarter to every hour. For more info on the cron format, see the appropriate man page:
man 5 crontab
To understand the rsync options, check the rsync manpage. This command won't remove images on staging when you remove them on your live server, but there are options for that (e.g. --delete; see the sketch below).
Also, remember to run the command manually once as the user in question, to accept ssh server keys and make sure key auth is working.
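As a starting point, here is a minimal mirror script (the paths are hypothetical, and --delete makes staging an exact mirror of live, so run with --dry-run first to see what it would do):
#!/bin/sh
# Mirror the live images folder to staging over ssh.
# --delete removes files on staging that no longer exist on live.
rsync -avz --delete -e ssh /var/www/images/ staging-server:/var/www/images/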
