How to have multiple values per row - sqlite

I've set up a database with SQLite using Perl and I'm trying to figure out how to store multiple values per row.
I've been trying to change my INSERT INTO statement, but with no success.
# Here I create the table.
$dbh->do("
CREATE TABLE probes(
source CHAR(15) NOT NULL,
port CHAR(5) NOT NULL,
PRIMARY KEY(source, port))");
# This is my prepare statement that I think needs to be changed.
my $sth = $dbh->prepare("INSERT INTO probes (source, port) VALUES(?,?)");
For example, I have a log file taken from a scan; each line has a source IP and a port number. I want the database to show
Source: Port:
127.0.0.1 5678 5839 5938
Instead of it showing like this:
Source: Port:
127.0.0.1 5678
127.0.0.1 5839
127.0.0.1 5938

Store one row per scan, as you are doing now, and use grouping and aggregation to get one row per source IP when you're ready to display the data. Something like:
SELECT source, group_concat(port, ' ') AS ports
FROM probes
GROUP BY source;
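A minimal, self-contained sketch of this approach (Python's sqlite3 module is used here purely for illustration; the table and column names match the question):

```python
import sqlite3

# In-memory database mirroring the schema from the question.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE probes(
        source CHAR(15) NOT NULL,
        port   CHAR(5)  NOT NULL,
        PRIMARY KEY(source, port))
""")

# One row per (source, port) observation, exactly as the question stores them.
rows = [("127.0.0.1", "5678"), ("127.0.0.1", "5839"), ("127.0.0.1", "5938")]
conn.executemany("INSERT INTO probes (source, port) VALUES (?, ?)", rows)

# Aggregate to one row per source only when displaying.
result = conn.execute(
    "SELECT source, group_concat(port, ' ') AS ports "
    "FROM probes GROUP BY source"
).fetchall()
print(result)
```

Note that SQLite does not guarantee the concatenation order of group_concat; if the port order matters, sort the values in your application after fetching.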


Why uuid() creates very similar identifiers in MariaDB?

I am creating a code generator for MariaDB to create a database based on a given JSON.
In that JSON, some initial data also exists, so I loop over the data and insert it into the database.
Some columns have uuid() default value.
Here's the result of my code inserting data into such a table:
Id,Guid,Key,Order
1,5c52e1db-6809-11ec-982c-0242c0a81003,New,
2,5c530e55-6809-11ec-982c-0242c0a81003,WaitingForBusinessResponse,
3,5c533551-6809-11ec-982c-0242c0a81003,WaitingForUserResponse,
4,5c536433-6809-11ec-982c-0242c0a81003,UnderInvestigation,
5,5c538ba5-6809-11ec-982c-0242c0a81003,Closed,
As you can see, the UUID values are very close to each other. This column has a unique index on it, so no duplicate entries are allowed, but values like these are difficult to track and easy to confuse with each other.
Is there a way to change this behavior? I want to tell MariaDB to create UUID more randomly.
The simple answer is "no, you can't", unless you write your own UUID function that creates a UUID from random or pseudo-random numbers, as described in section 4.4 of RFC 4122.
The uuid() function of MariaDB (and MySQL) is implemented according to RFC 4122, but uses the algorithm for creating a time-based UUID (see section 4.2).
Since all algorithms (name-based, time-based, random) deliver a Universally Unique Identifier that is globally unique in space and time, I don't really understand why you want to change the algorithm from time-based to random.
Time based uuids using uuidgen and mariadb:
~$ uuidgen -t;uuidgen -t
a5d3c032-6865-11ec-bd1f-1740cb8be951
a5d42d24-6865-11ec-bd1f-1740cb8be951
~$ mariadb -e"select uuid()\G";mariadb -e"select uuid()\G"
*************************** 1. row ***************************
uuid(): 45aca397-683c-11ec-a913-d83bbf89f2e2
*************************** 1. row ***************************
uuid(): 45ad94dd-683c-11ec-a913-d83bbf89f2e2
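If less predictable identifiers are a hard requirement, one workaround is to generate version-4 (random) UUIDs in the application code that performs the inserts, rather than relying on the column's uuid() default. A sketch with Python's stdlib uuid module, which implements RFC 4122 version 4 (Python is used here only for illustration):

```python
import uuid

# uuid4() draws from a strong random source, so consecutive values
# share no common prefix (unlike time-based uuid(), where the timestamp
# fields make back-to-back values nearly identical).
a = uuid.uuid4()
b = uuid.uuid4()
print(a)
print(b)
```

The generated string can then be supplied explicitly in the INSERT, bypassing the time-based default.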

MySQL bigint storage inconsistencies when converting IPv4 to IPv6 / Signed versus Unsigned

I have the following MySQL database table:
CREATE TABLE `example` (
`id` INT(11) NOT NULL AUTO_INCREMENT,
`ip` BIGINT(11) NOT NULL,
`ipv6` VARBINARY(16) NOT NULL,
PRIMARY KEY (`id`)
);
My goal is simply to copy/convert the existing IPv4 IP addresses in to IPv6 format in the new ipv6 column. So I run the following query which worked just fine in all my test cases:
UPDATE example SET ipv6 = INET6_ATON(INET_NTOA(ip));
This should be simple, right? No... After processing 1,083 records, MariaDB returns the following error:
Column 'ipv6' cannot be null.
Thinking this odd, I decide to start verifying the data:
There are 1279 records in this table.
All records contain a value for the ip column, so that seems good. So I scroll down to the first record that did not convert. It has a value of 40036798809, which is 11 digits, so that should match up with INT(11), right?
However, the second row that was not processed (keeping in mind that MySQL executed the UPDATE query in ascending order of the primary key id) has an ip value of 10317637058914, which is 14 digits long, which is not supposed to be possible in an INT(11) field, correct?
I see some other integers that clearly exceed the integer length, so I decide to ORDER the table by ip in HeidiSQL, and suddenly the highest value in the ip column is 1202623438, which is ten digits long. phpMyAdmin also shows the larger number; however, I have switched to HeidiSQL since I find its GUI superior for my local development.
After some research it appears that the length argument of the BIGINT datatype has nothing to do with the column's range. HeidiSQL changes the ip column values simply by changing the ORDER!
My next step, after continued reading, was to check whether the column is signed or unsigned. As HeidiSQL shows that Unsigned is not checked for the ip column, the ip column is signed, and therefore its maximum value is (commas added only for readability; actual values are purely numeric) 2,147,483,647, while the value that would not parse during the UPDATE query is 40,036,798,809.
Earlier research suggested that if a number in the column is larger than what is allowed (not sure why that would even be allowed?) then it would be treated as the maximum allowed value (I imagine in this case 2,147,483,647); is this true?
The Question(s)
In summation: why won't the UPDATE query parse the entire table?
Which depends on: what is the problem with MySQL and/or HeidiSQL with storing values?
Will MariaDB / MySQL allow the storage of numbers larger than what the table structure allows?
How is the value 40,036,798,809 (again, without commas) being treated during the UPDATE query if the ip column type is BIGINT(11)?
What would be the effective value stored for the highest valid IPv4 IP address (255.255.255.255)?
I presume when I ORDER in HeidiSQL that it is showing the effective highest value; is this best-guess accurate?
Your number 40,036,798,809 is larger than the maximum possible IPv4 address, 255.255.255.255, which yields 4,294,967,295 (0xFFFFFFFF), the largest unsigned 32-bit integer. So it can't possibly be an IPv4 address; hence the NULL, hence the error.
Maybe some of your numbers are not IPv4 addresses?
As for MySQL integer column sizes, I sense some confusion there with byte sizes. You can read up on that subject at What is the size of column of int(11) in mysql in bytes?
Your conversion is correct; your data is wrong. IPv4 involves 32-bit numbers; you have things stored in your BIGINT bigger than 32 bits.
This will locate the bad rows:
SELECT ip FROM example WHERE ip > INET_ATON('255.255.255.255');
You will probably find 196 (1279-1083) bad rows.
The (11) is totally unused and irrelevant (unless you have ZEROFILL).
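The 32-bit bound is easy to check outside the database as well. A sketch using Python's stdlib ipaddress module (nothing here is MySQL-specific; the two values come from the question):

```python
import ipaddress

# The largest value that can be an IPv4 address: 2**32 - 1.
MAX_IPV4 = int(ipaddress.IPv4Address("255.255.255.255"))
print(MAX_IPV4)  # anything above this cannot be an IPv4 address

# The highest value HeidiSQL showed fits in 32 bits and converts cleanly:
print(ipaddress.IPv4Address(1202623438))

# The value that broke the UPDATE does not fit, which is exactly
# why INET_NTOA() returns NULL for it on the database side:
try:
    ipaddress.IPv4Address(40036798809)
except ipaddress.AddressValueError as exc:
    print("not an IPv4 address:", exc)
```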

Redis Multiple Key Set Counts

So I add keys to my Redis implementation for wallpaper view counts like this...
(the values are there for demonstration purposes but the overall format is the same)
SADD wallpapers:100:2015-12-31 "127.0.0.1"
SADD wallpapers:100:2016-01-01 "127.0.0.1"
SADD wallpapers:100:2016-01-01 "192.168.1.1"
SADD wallpapers:100:2016-01-02 "127.0.0.1"
So that should add the IPs to the associated sets. So my question is, do they allow some sort of pattern-based counts?
SCARD wallpapers:100:2016-01-01
For example, the above command would return "2", as there are two IPs stored in that set, but is there a way to run something like the command below to get counts for all the dates?
SCARD wallpapers:100:*
Actually it's easier than you might think: store less specific sets to be able to get what you want.
For example, if you need wallpapers:100:*, it means you just need a set called wallpapers:100 where you store the unique IP addresses.
That is, whenever you add an IP address to one of the specific sets (i.e. the daily sets), also add it to the global set for the given wallpaper identifier.
Working with Redis is like working with a manual index: index your data in a way that lets you use it efficiently. That's all! This means that data redundancy is a good approach.
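The dual-write idea can be sketched without a Redis server: below, plain Python sets stand in for Redis sets, with SADD and SCARD mapping to set.add() and len() (illustrative only; the key layout matches the question):

```python
from collections import defaultdict

store = defaultdict(set)  # key -> set of members, standing in for Redis

def sadd(key, member):
    # Mirrors SADD, but also writes to the less specific "global" key:
    # wallpapers:100:2016-01-01 -> wallpapers:100
    store[key].add(member)
    global_key = key.rsplit(":", 1)[0]
    store[global_key].add(member)

sadd("wallpapers:100:2015-12-31", "127.0.0.1")
sadd("wallpapers:100:2016-01-01", "127.0.0.1")
sadd("wallpapers:100:2016-01-01", "192.168.1.1")
sadd("wallpapers:100:2016-01-02", "127.0.0.1")

# SCARD of a daily set, and of the global set -- no pattern scan needed.
print(len(store["wallpapers:100:2016-01-01"]))  # 2
print(len(store["wallpapers:100"]))             # 2 unique IPs overall
```

With real Redis the same shape applies: every SADD to a daily key is paired with an SADD to the wallpaper's global key, so SCARD on the global key answers the "all dates" question in O(1).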
EVAL "local total = 0 for _, key in ipairs(redis.call('keys', ARGV[1])) do total = total + redis.call('scard', key) end return total" 0 wallpapers:100:*
This command returns the total number of elements across all keys matching wallpapers:100:*.
If you want the total number of unique values from all keys combined:
EVAL "return redis.call('SUNIONSTORE', 'wallpapers:temp', unpack(redis.call('keys', ARGV[1])))" 0 wallpapers:100:*
This will return the number of unique values from all keys combined, and it also creates a key wallpapers:temp.
You can delete this key later with DEL wallpapers:temp.
I used SUNIONSTORE for the second command. Note that KEYS scans the entire keyspace and blocks the server while it runs, so this approach is best kept to small datasets or offline jobs.
Refer to EVAL.

sql equivalent to "grep x | head -n1"

I'm beginning to learn SQL and have created an SQLite database from what was previously a text file of my daily IP address assignments from Comcast. With the text file, if I wanted to find the first date an IP address was assigned, I could chain:
cat, awk, sort, for/do, grep and head -n1
to get a list of the first dates any particular IP address was assigned. How can I do that with SQL?
select distinct ip from history;
does not display the date column, and
select distinct ip, date from history;
returns all the db entries. What am I not doing? Thanks.
Your question is exactly a duplicate of this: SQL query to select distinct row with minimum value. You are trying to select the minimum value in the date column (the start date) for every unique IP. Group by ip and take MIN(date); you only need a join back to the table if you also want other columns from that earliest row.
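A minimal sketch of the grouped query (Python's sqlite3 module is used for illustration; the history, ip and date names come from the question, and the sample rows are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE history (ip TEXT, date TEXT)")
conn.executemany("INSERT INTO history VALUES (?, ?)", [
    ("73.0.0.1", "2021-01-05"),
    ("73.0.0.1", "2021-01-02"),
    ("73.0.0.2", "2021-03-01"),
])

# First date each IP was assigned: MIN(date) per distinct ip.
# ISO-8601 date strings compare correctly as text.
first_seen = conn.execute(
    "SELECT ip, MIN(date) FROM history GROUP BY ip ORDER BY ip"
).fetchall()
print(first_seen)  # [('73.0.0.1', '2021-01-02'), ('73.0.0.2', '2021-03-01')]
```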

Using Spring JDBCTemplate with Postgres to query ip addresses

I have a table that has IP addresses (v4) stored as varchar (I cannot change this).
I'm trying to query for a range of IP addresses, like this:
select colA, colB from table where cast(ipaddress as inet) >= ?
and I'm passing in to my PreparedStatement:
"'1.1.1.1'::inet"
I've also tried:
"cast('1.1.1.1' as inet)"
and
"inet '1.1.1.1'"
I get an error that my type is not correct for type inet.
I've also tried to create an InetAddress for the IP address and pass that in as my argument, which gives me a whole other error.
Has anyone else had this same problem and conquered it?
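No answer is included in this excerpt, but the usual fix for this class of error is to put the cast in the SQL itself (... >= cast(? as inet), or the ?::inet shorthand) and bind a plain string such as "1.1.1.1", rather than embedding ::inet inside the bound value, which the driver sends as a literal string. The range comparison the query expresses can be sanity-checked outside the database with Python's stdlib ipaddress module (illustrative only; the sample rows are assumptions):

```python
import ipaddress

# The comparison the query is meant to express: ip >= lower bound,
# ordered numerically rather than as varchar text.
lower = ipaddress.IPv4Address("1.1.1.1")

stored = ["0.255.255.255", "1.1.1.1", "10.0.0.5"]  # hypothetical varchar values
in_range = [s for s in stored if ipaddress.IPv4Address(s) >= lower]
print(in_range)  # ['1.1.1.1', '10.0.0.5']
```

Note that a plain varchar comparison would order "10.0.0.5" before "1.1.1.1" lexicographically, which is exactly why the cast to inet is needed on the database side.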
