How to determine the userid of the person who has opened the table in UNIX - unix

Is it possible to find out the userid of the person who has the table open while an update on that table is in progress? I generally get an error like "A lock is held by process 28036".
It would be really helpful if anyone could guide me on this.

Try ps -fp 28036 | tail -1 | awk '{print $1}'. That should give you the username of the owner of the process. It also does not require superuser privileges.
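For reference, a couple of equivalent one-liners (the PID 28036 is taken from the error message above; ps -o user= is a portable alternative that avoids the tail/awk step entirely):
# print the username of the owner of PID 28036
ps -fp 28036 | tail -1 | awk '{print $1}'
# same result, letting ps format the output directly
ps -o user= -p 28036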

Related

mariadb query results separated by semicolon

So, I know nothing about databases and just need some information from one of its tables. I need to count duplicate assets with the same name. I am not using .sql files with queries inside them. Previously, I had to log in to the database and run this command; I want to automate it using another script that I already built.
So, this is what I am using now:
mysql -udb_tailstate -proot db_tailstate -e "SELECT name, COUNT(*) FROM ue GROUP BY name HAVING COUNT(*) > 1;"
And the result:
+------+----------+
| name | COUNT(*) |
+------+----------+
| ue1  |        2 |
+------+----------+
Since the result shown is always more than 1, I would like to run something similar to this (in one statement) to show only the name of the asset that is duplicated.
I bet that for you genius guys this is easy.
Thanks in advance
CM
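A minimal sketch of one way to do that, reusing the exact credentials and query from the question: the mysql client's -N (skip column names) and -s (silent, tab-separated) options strip the table decoration, and selecting only name drops the count column:
mysql -udb_tailstate -proot db_tailstate -N -s -e "SELECT name FROM ue GROUP BY name HAVING COUNT(*) > 1;"
For the sample data above this prints just ue1, one duplicated name per line.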

How to add items in DynamoDB via console?

I am looking to add some items into DynamoDB via the console (please see the screenshot below). When I click the "Save" button, nothing happens; no new items are created. I also checked the DynamoDB JSON checkbox to convert the JSON into DynamoDB-compatible JSON and clicked the Save button again, but nothing happens. Can someone please advise what I am doing wrong? There are no error messages either.
You haven't provided your table definition so it's difficult to say exactly what a valid item for your table would look like but I can tell you for sure that:
1) You shouldn't be creating an array of JSON objects: each item you create must be an individual valid JSON object. Like so:
{
  "sub": 1234,
  "EventID": ["B213", "B314"]
}
2) Each item you create must include attributes matching the item schema for your table. This means that if your table has just a partition key defined then each item must include one attribute whose name matches the name of the partition key. If the table has both partition and sort key then each item you create must include at least two attributes, one matching the partition key, the other matching the sort key. And finally, the partition and sort keys must be string or numeric.
Assuming your table has a partition key called sub and no sort key, then the item example above would work.
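For illustration only (the sort key name EventDate is made up here, not taken from the question), an item for a table defined with both a partition key sub and a sort key EventDate would have to carry both attributes:
{
  "sub": 1234,
  "EventDate": "2021-01-01",
  "EventID": ["B213", "B314"]
}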
Update
Based on the comment, it sounds like the OP was looking for a way to insert multiple items in a single operation. This is not possible with the console, and actually it goes deeper than that: Dynamo fundamentally operates on a single item at a time for write operations. It is of course possible to batch up to 25 item writes using the API, but that is just a convenience.
If you need to add multiple items to your table, consider writing a small script using the AWS CLI or the API. It is relatively easy to do!
The scripting solution looks something like this:
aws-dynamodb-upload-json.sh
#!/bin/bash
set -e

# parse arguments
if [[ $# -eq 0 ]]; then set -- "--help"; fi
if [[ "$1" = "--help" ]]; then
    echo "Usage:"
    echo "  aws-dynamodb-upload-json {table-name} {file.json}"
    exit 1
fi

# config
AWS_DYNAMODB_UPLOAD_TABLE_NAME=$1
AWS_DYNAMODB_UPLOAD_FILE_INPUT=$2
echo "Configuration"
echo "AWS_DYNAMODB_UPLOAD_TABLE_NAME=$AWS_DYNAMODB_UPLOAD_TABLE_NAME"
echo "AWS_DYNAMODB_UPLOAD_FILE_INPUT=$AWS_DYNAMODB_UPLOAD_FILE_INPUT"

# main: emit each element of the JSON array on its own line, then put-item one at a time
jq -c '.[]' < "$AWS_DYNAMODB_UPLOAD_FILE_INPUT" |
while read -r row
do
    echo "Entry: $row"
    echo ""
    aws dynamodb put-item \
        --region us-east-1 \
        --table-name "$AWS_DYNAMODB_UPLOAD_TABLE_NAME" \
        --item "$row"
done
This does rely on the aws CLI and the jq CLI being installed and on your $PATH.
Hopefully AWS adds an easier way to do this via the web interface someday.
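For reference, a hypothetical invocation (the table name MyTable and the file items.json are placeholders; note that put-item expects DynamoDB JSON, i.e. typed attribute values, so the input array should look like this rather than plain JSON):
$ cat items.json
[
  {"sub": {"N": "1234"}, "EventID": {"L": [{"S": "B213"}, {"S": "B314"}]}},
  {"sub": {"N": "5678"}, "EventID": {"L": [{"S": "C101"}]}}
]
$ ./aws-dynamodb-upload-json.sh MyTable items.json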

SQLite update bit in column

We have an integer column in our SQLite DB. I'd like to update all the records in the database to set a specific bit in this integer. Is there an easy way to do it in one SQL command?
E.g. in MySQL you could do something like: "UPDATE users SET permission = permission | 16;"
Turns out you can do exactly the same.
UPDATE users SET permission = permission | 16;
Works great.
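As a side note (a sketch along the same lines, not part of the original answer), the other bitwise operators work in SQLite too, e.g. clearing that same bit or selecting the rows that have it set:
UPDATE users SET permission = permission & ~16;
SELECT * FROM users WHERE permission & 16;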

Postgres: org.postgresql.util.PSQLException: ERROR: insufficient data left in message

I read enough to know that this occurs when a string contains some characters that Postgres doesn't like. However, I cannot figure out if there is a way to validate strings before writing them. In particular, I'm doing batch inserts.
insert into foo(col1,col2,col3) values ('a',2,3),('b',4,0),....
My DB is setup like this:
Name | Owner | Encoding | Collate | Ctype | Access privileges
------------+--------+----------+---------+-------+-------------------
stats | me | UTF8 | C | C |
Periodically, some bad string will get in and the whole insert will fail (e.g. change���=). I batch up quite a few values in a single insert, so I'd ideally like to validate the string rather than bomb the whole insert. Is there a list of which characters are not allowed in a Postgres insert?
Using postgresql-jdbc 9.1-901.jdbc4
This message means that your string data has a null character "\0" in it.
I can't find an authoritative cite for this (let me know if you have one).
It is discussed at https://www.postgresql.org/message-id/alpine.BSO.2.00.0906031639270.2432%40leary.csoft.net
It is mentioned in passing in the official docs at https://www.postgresql.org/docs/9.3/static/functions-string.html
All other characters are allowed.
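If the values pass through a flat file before the batch insert is built, one low-tech way to pre-validate is simply to strip NUL bytes up front (a sketch; the file name data.txt is a placeholder):
# remove any \0 bytes before the data reaches the INSERT statement
tr -d '\000' < data.txt > data_clean.txt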
Another possible cause: the data type does not match the target type, for example int4 -> int8.
In my case, I was loading a query from an SQL file. The problem was due to the encoding. I changed it to UTF-8 and it works. Hope that helps!

Checksum for a SQLite database?

I'd like to be able to tell whether a SQLite database file has been updated in any way. How would I go about implementing that?
The first solution I think of is comparing checksums, but I don't really have any experience working with checksums.
According to http://www.sqlite.org/fileformat.html, SQLite3 maintains a file change counter in bytes 24..27 of the database file. It is independent of the file change time, which, for example, can change after a binary restore or rollback while nothing changed at all:
$ sqlite3 test.sqlite 'create table test ( test )'
$ od --skip-bytes 24 --read-bytes=4 -tx1 test.sqlite | sed -n '1s/[^[:space:]]*[[:space:]]//p' | tr -d ' '
00000001
$ sqlite3 test.sqlite "insert into test values ( 'hello world');"
$ od --skip-bytes 24 --read-bytes=4 -tx1 test.sqlite | sed -n '1s/[^[:space:]]*[[:space:]]//p' | tr -d ' '
00000002
$ sqlite3 test.sqlite "delete from test;"
$ od --skip-bytes 24 --read-bytes=4 -tx1 test.sqlite | sed -n '1s/[^[:space:]]*[[:space:]]//p' | tr -d ' '
00000003
$ sqlite3 test.sqlite "begin exclusive; insert into test values (1); rollback;"
$ od --skip-bytes 24 --read-bytes=4 -tx1 test.sqlite | sed -n '1s/[^[:space:]]*[[:space:]]//p' | tr -d ' '
00000003
To be really sure that two database files are "equal", you would have to dump both files (.dump), reduce the output to the INSERT statements, and sort the result for comparison (perhaps via some cryptographically secure checksum). But that is plain overkill.
Depending on the size of the database, continually polling and generating a checksum may be a bit too intensive for the machine.
Have you considered monitoring the last-modified metadata stored on the OS file system instead?
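A minimal sketch of that idea (GNU stat assumed; test.sqlite is the file name used elsewhere on this page):
prev=$(stat -c %Y test.sqlite)   # remember the current mtime
# ... later, on each poll ...
[ "$(stat -c %Y test.sqlite)" != "$prev" ] && echo "database file changed"
As noted above, the mtime can change even when the content did not, so this only tells you the file was touched.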
If one of the SQLite databases is used only as a copy (read only) and you want to check whether the original database file has been updated, so you can refresh the copy (from the web, for example, without having to download the original if it is not different from the copy), then you may just compare the first 100 bytes of both database files (the database header): http://www.sqlite.org/fileformat.html
For bytes 24..27 of the database header, the SQLite doc says:
24..27 4
The file change counter. Each time a database transaction is committed, the value of the 32-bit unsigned integer stored in this field is incremented
After some testing, it appears that the file change counter is not incremented when a committed transaction contains only SELECT statements, which is the behavior you want in case you wrap your selects in a transaction and commit to end the transaction.
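Continuing the session from the earlier answer, this is easy to check yourself; if the observation above holds, the counter printed after the commit is the same as before it:
$ sqlite3 test.sqlite "begin exclusive; select * from test; commit;"
$ od --skip-bytes 24 --read-bytes=4 -tx1 test.sqlite | sed -n '1s/[^[:space:]]*[[:space:]]//p' | tr -d ' '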
A bit late an answer maybe, 10+ years later, but besides the file change counter, people might be interested in the dbhash utility (from the SQLite project itself), which computes a logical SHA1 hash of an SQLite database and is therefore immune to physical differences.
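Usage is a one-liner (a sketch; dbhash ships with the SQLite source tree and may need to be built or installed separately), printing the hash followed by the file name:
$ dbhash test.sqlite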
