Why does InnoDB primary election prioritize the lower server version?

I am reading the MySQL documentation about InnoDB single-primary mode. It states, as quoted below, that the first factor in selecting the next primary is the server version; the member weight and UUID come after the version. What is the reason for this? My guess is that a higher-version server, if used as the primary node, could have features that lower-version nodes can hardly accept, but what would those be?
The first factor considered is which member or members are running the lowest MySQL Server version. If all group members are running MySQL 8.0.17 or higher, members are first ordered by the patch version of their release. If any members are running MySQL Server 5.7 or MySQL 8.0.16 or lower, members are first ordered by the major version of their release, and the patch version is ignored.

Backward compatibility is usually built into any release, and the Master is essentially in charge of the replication stream. So it is reasonably safe for a Slave to be running a "newer" version.
Without this convention, it would be hard to release incompatible features -- the Master would need to negotiate with the Slaves to decide which old protocol to use.
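(Not part of the original answer - a minimal illustration.) In a running group, you can see the versions that the election sorts on by querying Performance Schema:

    -- Members are ordered by lowest version first, then weight, then UUID
    SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_VERSION, MEMBER_ROLE
    FROM performance_schema.replication_group_members;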

Can't Grant SLAVE MONITOR to user on MariaDB 10.6 Primary

We recently replaced an old MariaDB 10.3 primary with one of its replicas, which is running 10.6.x, hoping that this would resolve a weird primary/replica issue we have had since creating the replicas.
The Struggle:
Per the MariaDB documentation, in order for a user to have access to SHOW REPLICA STATUS (formerly SHOW SLAVE STATUS) in MariaDB 10.3, that user needed the REPLICATION CLIENT privilege. REPLICATION CLIENT was renamed to BINLOG MONITOR in MariaDB 10.5.2, and the privilege does show up as BINLOG MONITOR when granting REPLICATION CLIENT on versions 10.5.2 or newer. However, according to the MariaDB KB (and confirmed by my experience), "Unlike REPLICATION CLIENT prior to MariaDB 10.5, SHOW REPLICA STATUS isn't included in this privilege, and REPLICA MONITOR is required". This has created a bit of a headache for me.
The old problem:
Due to the cup-and-ball trick MariaDB has decided to play with the SHOW REPLICA STATUS privilege, I couldn't grant REPLICA MONITOR on the old primary without getting an error (because that privilege doesn't exist on 10.3), and REPLICATION CLIENT wasn't sufficient on the replicas (because SHOW REPLICA STATUS was moved to REPLICA MONITOR). This led me to EOL the old primary and promote one of the 10.6 replicas to primary.
The new problem (or just the old problem persisting):
The problem, however, is that the new primary, which is running 10.6, is behaving almost exactly like the old primary (which, again, was on 10.3). The only difference is that when I grant REPLICA MONITOR now, I don't get an error, but the grant doesn't stick. I can FLUSH PRIVILEGES and run SHOW GRANTS... on the user, but it isn't there.
So the question is: what would cause a MariaDB 10.6 primary to behave like the former 10.3 primary in this scenario? Is there some config or system variable I am unaware of?
FWIW, the machine was rebooted a few times during the fail-over process, but if that is the fix, it can be done again. I have also tried granting SLAVE MONITOR, which is the former name of REPLICA MONITOR, but it doesn't stick either. I also tried granting BINLOG MONITOR, which does stick, but as I've already covered, it isn't sufficient on 10.6.
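A minimal reproduction of what I'm seeing (the user and host below are made up):

    -- On the 10.6 primary:
    GRANT REPLICA MONITOR ON *.* TO 'monitor_user'@'%';
    FLUSH PRIVILEGES;
    SHOW GRANTS FOR 'monitor_user'@'%';
    -- REPLICA MONITOR does not appear in the output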

How many transactions can Corda save in its h2 database?

We need to save about 10 million transactions (each transaction will save one hash of an electronic contract) per day and keep them for at least 3 years.
Because Corda currently supports only the H2 database, we want to know whether Corda might be unable to save that many transactions due to a limit of the database or something else.
The open source version of Corda was originally coded against the H2 database. We have had community contributions which enable Corda to run against Postgres, although please be aware that this is a contribution; we run all of our testing against H2, so we may not immediately pick up issues when running against the Postgres database platform.
However, "R3 Corda" will support a variety of fully pluggable databases. If you intend to use Corda in a production environment, then I would recommend you use this version for deployment. Any code written against O/S Corda (or running on an O/S Corda node) will be fully compatible with the R3 Corda platform.
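For reference, a sketch of what the community Postgres support looks like in node.conf; the class name is the standard Postgres JDBC datasource, and the URL and credentials are placeholders:

    dataSourceProperties = {
        dataSourceClassName = "org.postgresql.ds.PGSimpleDataSource"
        dataSource.url = "jdbc:postgresql://db-host:5432/corda"
        dataSource.user = "corda"
        dataSource.password = "changeme"
    }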

Does libvirt cpu mode='host-model' get confused while mapping CPU models?

I have a physical host whose CPU model is 'Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz' and which has the 'avx2' flag in cpuinfo. The host has the KVM/QEMU hypervisor and libvirt configured. I set the CPU mode to host-model in the domain XML. A guest VM can be created on the host. When I check the guest VM's CPU model, it shows 'SandyBridge', and it also has the 'avx2' flag in cpuinfo. But 'SandyBridge' does not support the 'avx2' flag, while the 'Haswell' model does. Is it just that, due to host-model mode, libvirt finds the nearest CPU model to 'Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz' to be 'SandyBridge', when it should show 'Haswell' instead? Does that mean libvirt has a bug, or is this a valid representation in this scenario? I am using libvirt version 1.2.2.
Within a particular chip generation (SandyBridge, Haswell, etc.), Intel does not in fact guarantee that all the different models it makes have the same CPU flags present. We can see this with Haswell or later, where some CPUs have the TSX feature and some don't. QEMU/libvirt generally only provide a single model for each Intel generation, though, so it's possible that your physical CPU might not actually be compatible with the correspondingly named QEMU model.
From libvirt's point of view, the names are just a shortcut for a particular group of features. As such, when identifying the CPU for "host-model", libvirt completely ignores names and just looks for the CPU model whose list of features most closely matches your host CPU, then lists any extra CPU features explicitly in the XML. All this means that even though you have a Haswell as your physical CPU, it is entirely possible that libvirt will display a different model name for your guest. There's nothing really wrong with this from a functional point of view - the features should all still be present (except for a few that KVM intentionally blocks); it is merely a bit "surprising" to look at.
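As an illustration (a hypothetical rendering, not taken from the asker's machine), the guest CPU definition libvirt computes might look like this in the live domain XML - a baseline named model plus the extra host features listed explicitly:

    <cpu mode='custom' match='exact'>
      <model fallback='allow'>SandyBridge</model>
      <feature policy='require' name='avx2'/>
      <!-- ...further host features not included in the SandyBridge model... -->
    </cpu>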
In your case, what I think is going on is due to the bug in Intel TSX support. This feature was introduced in Haswell, but then blocked in a microcode update after Intel found out it was broken. This causes the 'tsx' feature to disappear from the CPU model in your physical machine. The libvirt/QEMU Haswell CPU model still contains 'tsx', so libvirt won't match against your Haswell CPU. In libvirt >= 1.2.14, we introduced a new Haswell-noTSX CPU model to deal with this particular problem, but you say you only have 1.2.2. SandyBridge is simply the next best compatible CPU model that libvirt can find for you.
I found another workaround which doesn't require upgrading libvirt. I removed the hle and rtm flags from the definition of Haswell in the CPU-mapping XML file used by libvirt (/usr/share/libvirt/cpu_map.xml) and then restarted the libvirt process. After rebooting the VM, it showed the correct model name, Haswell.
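Concretely, the edit amounts to deleting the two TSX feature lines from the Haswell model definition in /usr/share/libvirt/cpu_map.xml (an abridged sketch; exact contents vary by libvirt version):

    <model name='Haswell'>
      <model name='SandyBridge'/>
      <!-- ...other Haswell features... -->
      <feature name='hle'/>  <!-- delete this line -->
      <feature name='rtm'/>  <!-- delete this line -->
    </model>

Note that this file is shared by every guest on the host, so the change applies globally, and a later libvirt upgrade may overwrite it.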

How to fix an FQDN mismatch for an Intel AMT system with Intel SCS 10

I have two systems on my domain and have configured Intel AMT with SCS. However, I needed to change the host name on both systems, and afterwards the SCS database is not getting updated correctly after a maintenance task. The DB still shows the old FQDNs, and discovery is reporting a mismatch error. How do I resolve this?
The source used to configure the FQDN setting (hostname.suffix) in the Intel AMT device is defined in the configuration profile. The profile includes several options that you can use to define how the FQDN of the device will be constructed.
When changes are made to the host computer or the network environment, the basis on which the FQDN setting was constructed might change. These changes can include changing the hard disk, replacing the operating system, or re-assigning the computer to a different user. If the FQDN setting in the Intel AMT device is not updated with these changes, problems can occur.
Intel SCS includes options that you can use to detect and fix these “mismatches”.
Intel AMT configuration is bound to your platform's hardware. Since the records in the SCS DB are currently in a mismatch status with the information in your AMT host, you will need to perform the following procedure to fix the mismatch:
Download ACUConfig.exe to your AMT host and run the following command there, where the values in angle brackets need to be replaced with the values for your environment:

    ACUConfig.exe SystemDiscovery /ReportToRCS /AdminPassword <password> RCSAddress <RCSAddress>
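For example, with made-up placeholder values:

    ACUConfig.exe SystemDiscovery /ReportToRCS /AdminPassword MyAmtPass1 RCSAddress rcs01.example.local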
Under the Monitoring > Views tab, you will see the system that was detected to have a mismatch. To reconcile the records in the DB, you will need to perform one more action.
Create a Job. In the Job Definition window, select these options:
From the drop-down list in the Filter section, select Host FQDN Mismatch.
From the Operation drop-down list, select Fix host FQDN mismatch.
Now all that is left to do is run the job using the context menu on the newly created job. You can monitor the AMT host log for more details, and you will also see that the record in the Mismatch view gets cleared.
Good Luck!

Remotely Verifying the Application in Execution

Is it possible to prove to a remote party, using DRTM or SRTM, that the application running on my system is the one I claim to be running? If yes, then how?
Theoretically: yes. The concept is called remote attestation.
The basic idea is: First you have a sound chain of trust built on your platform, like:
BIOS ==> Boot loader ==> OS ==> Applications
The resulting measurements are stored in the PCRs.
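To make the order-dependence concrete, here is a minimal Python sketch (not from the original answer) of the "extend" operation that builds a PCR value, using SHA-1 as in TPM 1.2; the measured components are placeholders:

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM extend: new PCR = H(old PCR || measurement digest)
        return hashlib.sha1(pcr + measurement).digest()

    pcr = bytes(20)  # PCRs start out all-zero
    for component in (b"BIOS", b"boot loader", b"OS kernel", b"application"):
        pcr = extend(pcr, hashlib.sha1(component).digest())
    print(pcr.hex())
    # Swapping any two components yields a completely different final value.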
Now you can let the TPM sign this set of PCRs; this is called a quote.
You can submit this quote to a remote entity. Here the problems start:
How can you prove that the quote was signed by a hardware TPM and not an emulator?
Possible solutions: pre-shared keys or some kind of CA.
How can you be sure that the PCR values represent a trusted system state?
That's not so easy. If you have SRTM, you have to consider every possible combination of how your system loads the components. E.g. in the BIOS phase, in which order are the option ROMs loaded?
Here DRTM comes to the rescue, but it makes the matter only slightly easier. With DRTM you can forget about all the pre-DRTM stuff. If you have a small trusted environment, say like Flicker, then you'll have a manageable set of trusted configurations. If you have a full-featured OS, then it's hard.
First, you have to find an OS that measures everything. IBM's IMA for the Linux kernel is one example. Then, the slightest difference in the order of loaded components will lead to different PCR values. Furthermore, consider all the combinations of states the different installed software packages might be in.
Possible solutions are to restrict the possible set of PCR values that represent a valid configuration. For example, you can measure a whole OS image instead of each binary. An example is the acTvSM platform, published a few years ago.
Conclusion: There is no easy, off-the-shelf solution, but you can design a system such that it fits your requirements.
