The X509ChainStatusFlags enum contains a lot of possible values: https://learn.microsoft.com/en-us/dotnet/api/system.security.cryptography.x509certificates.x509chainstatusflags?view=netframework-4.8
Are there easy ways to construct a certificate and chain that produce some of these flags? I want to construct them in order to integration-test my certificate validation logic.
Each different kind of failure requires a different amount of work to test for. Some are easy, some require heroic effort.
The easiest: error code 1: X509ChainStatusFlags.NotTimeValid.
X509Certificate2 cert = ...;
X509Chain chain = new X509Chain();
chain.ChainPolicy.VerificationTime = cert.NotBefore.AddSeconds(-1);
bool valid = chain.Build(cert);
// valid is false, and element 0 will have NotTimeValid as one of the reasons.
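To assert on a specific flag in an integration test, inspect the status entries after Build returns. A minimal sketch, assuming the cert and chain from above (the LINQ aggregation is just one way to collapse the per-element flags):
using System.Linq;
// Element 0 is the end-entity certificate; each element carries its own status list.
X509ChainStatusFlags leafFlags = chain.ChainElements[0].ChainElementStatus
    .Aggregate(X509ChainStatusFlags.NoError, (acc, s) => acc | s.Status);
bool sawNotTimeValid = (leafFlags & X509ChainStatusFlags.NotTimeValid) != 0;
// chain.ChainStatus holds the union across all elements, if you only care that
// the flag shows up somewhere in the chain.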
Next up: X509ChainStatusFlags.NotSignatureValid.
X509Certificate2 cert = ...;
byte[] certBytes = cert.RawData;
// flip all the bits in the last byte
certBytes[certBytes.Length - 1] ^= 0xFF;
X509Certificate2 badCert = new X509Certificate2(certBytes);
X509Chain chain = new X509Chain();
bool valid = chain.Build(badCert);
// valid is false. On macOS this results in PartialChain,
// on Windows and Linux it reports NotSignatureValid in element 0.
Next up: X509ChainStatusFlags.NotValidForUsage.
X509Certificate2 cert = ...;
X509Chain chain = new X509Chain();
chain.ChainPolicy.ApplicationPolicy.Add(new Oid("0.0", null));
bool valid = chain.Build(cert);
// valid is false if the certificate has an EKU extension,
// since that extension shouldn't contain the 0.0 OID,
// and element 0 will report NotValidForUsage.
Some of the more complicated ones require building certificate chains incorrectly, such as making a child certificate have a NotBefore/NotAfter that isn't nested within the CA's NotBefore/NotAfter (a sketch of one such construction is below). Some of these heroic efforts are tested in https://github.com/dotnet/runtime/blob/4f9ae42d861fcb4be2fcd5d3d55d5f227d30e723/src/libraries/System.Security.Cryptography.X509Certificates/tests/DynamicChainTests.cs and/or https://github.com/dotnet/runtime/blob/4f9ae42d861fcb4be2fcd5d3d55d5f227d30e723/src/libraries/System.Security.Cryptography.X509Certificates/tests/RevocationTests/DynamicRevocationTests.cs.
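For those misconstructed-chain cases you don't need a real CA; a throwaway hierarchy can be generated inside the test itself. A rough sketch using the CertificateRequest API (available since .NET Core 2.0); the subject names, key size, and validity periods here are invented for illustration, and exactly which status flags get reported for the non-nested validity differs by platform and OS version:
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

using (RSA caKey = RSA.Create(2048))
using (RSA leafKey = RSA.Create(2048))
{
    // Self-signed "CA" valid for 30 days.
    var caReq = new CertificateRequest(
        "CN=Test CA", caKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
    caReq.CertificateExtensions.Add(new X509BasicConstraintsExtension(true, false, 0, true));
    caReq.CertificateExtensions.Add(new X509SubjectKeyIdentifierExtension(caReq.PublicKey, false));

    DateTimeOffset caStart = DateTimeOffset.UtcNow.AddDays(-10);
    DateTimeOffset caEnd = caStart.AddDays(30);

    using (X509Certificate2 caCert = caReq.CreateSelfSigned(caStart, caEnd))
    {
        // Leaf deliberately outlives the CA by a year, so its validity is not
        // nested within the issuer's.
        var leafReq = new CertificateRequest(
            "CN=Leaf", leafKey, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);
        leafReq.CertificateExtensions.Add(new X509BasicConstraintsExtension(false, false, 0, false));

        using (X509Certificate2 leafCert = leafReq.Create(
            caCert, caStart, caEnd.AddYears(1), new byte[] { 1, 2, 3, 4 }))
        using (var chain = new X509Chain())
        {
            chain.ChainPolicy.ExtraStore.Add(caCert);
            chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;
            chain.ChainPolicy.VerificationFlags = X509VerificationFlags.AllowUnknownCertificateAuthority;

            bool valid = chain.Build(leafCert);
            // Inspect chain.ChainStatus / chain.ChainElements[...].ChainElementStatus
            // to see what this platform reports for the broken validity nesting.
        }
    }
}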
Related
I'm using the Pkcs11Interop library and trying to test encryption and decryption with the RSA_PKCS_OAEP mechanism.
CK_RSA_PKCS_OAEP_PARAMS p = new CK_RSA_PKCS_OAEP_PARAMS();
p.HashAlg = (uint)CKM.CKM_SHA_1;
p.Mgf = (uint)CKG.CKG_MGF1_SHA1;
p.Source = (uint)CKZ.CKZ_DATA_SPECIFIED;
p.SourceData = IntPtr.Zero;
p.SourceDataLen = 0;
CK_MECHANISM mech = CkmUtils.CreateMechanism(CKM.CKM_RSA_PKCS_OAEP, p);
Everything is OK with the above mechanism but if I change the hash algorithm to SHA-256 like below:
CK_RSA_PKCS_OAEP_PARAMS p = new CK_RSA_PKCS_OAEP_PARAMS();
p.HashAlg = (uint)CKM.CKM_SHA256;
p.Mgf = (uint)CKG.CKG_MGF1_SHA256;
p.Source = (uint)CKZ.CKZ_DATA_SPECIFIED;
p.SourceData = IntPtr.Zero;
p.SourceDataLen = 0;
CK_MECHANISM mech = CkmUtils.CreateMechanism(CKM.CKM_RSA_PKCS_OAEP, p);
Then I get a CKR_ARGUMENTS_BAD exception. I have been searching and debugging for a while but have found nothing.
I had the same problem with a Luna HSM (but got CKR_MECHANISM_PARAM_INVALID instead).
That version of the HSM simply did not support OAEP with SHA-256 and a firmware upgrade was needed. After the firmware upgrade it worked without any problems. Check whether your device supports this variant.
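If you want to check this programmatically rather than digging through the device documentation, the token's mechanism list can be queried with Pkcs11Interop. A rough sketch using the Pkcs11Interop 5.x high-level API (the library path is a placeholder); note that PKCS#11 only advertises mechanisms, so seeing CKM_SHA256 in the list doesn't strictly guarantee the token accepts it as the OAEP hash parameter, but its absence is a strong hint:
using System;
using System.Collections.Generic;
using Net.Pkcs11Interop.Common;
using Net.Pkcs11Interop.HighLevelAPI;

string pkcs11LibraryPath = @"C:\path\to\vendor-pkcs11.dll"; // placeholder path

var factories = new Pkcs11InteropFactories();
using (IPkcs11Library library = factories.Pkcs11LibraryFactory.LoadPkcs11Library(
    factories, pkcs11LibraryPath, AppType.MultiThreaded))
{
    foreach (ISlot slot in library.GetSlotList(SlotsType.WithTokenPresent))
    {
        // List the mechanisms the token reports before assuming SHA-256 OAEP support.
        List<CKM> mechanisms = slot.GetMechanismList();
        Console.WriteLine("Slot {0}:", slot.SlotId);
        Console.WriteLine("  CKM_RSA_PKCS_OAEP: {0}", mechanisms.Contains(CKM.CKM_RSA_PKCS_OAEP));
        Console.WriteLine("  CKM_SHA256:        {0}", mechanisms.Contains(CKM.CKM_SHA256));
    }
}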
Your code seems OK. I used (in Java):
CK_RSA_PKCS_OAEP_PARAMS mechanismParams = new CK_RSA_PKCS_OAEP_PARAMS(
    CKM.SHA_1,
    CKG.MGF1_SHA1,
    new CK_RSA_PKCS_OAEP_SOURCE_TYPE(CKZ.DATA_SPECIFIED.longValue()),
    null, 0
);
and
CK_RSA_PKCS_OAEP_PARAMS mechanismParams = new CK_RSA_PKCS_OAEP_PARAMS(
    CKM.SHA256,
    CKG.MGF1_SHA256,
    new CK_RSA_PKCS_OAEP_SOURCE_TYPE(CKZ.DATA_SPECIFIED.longValue()),
    null, 0
);
Good luck!
I am using Embarcadero RAD Studio 10. I am trying to use Indy client/server components in my application.
I want to adjust the TCP/UDP server IP address and port at runtime.
I can see the default settings at design-time:
I can add entries to the Bindings and set the DefaultPort.
But, I want to do this while the program is running. I want to set the bindings and port in my UI and push a button to make the server use what I entered.
How do I do this?
The Bindings property is a collection of TIdSocketHandle objects. Adding a new entry to the collection at design-time is the same as calling the Bindings.Add() method at runtime.
TIdSocketHandle has IP and Port properties. When a TIdSocketHandle object is created, its Port is initialized with the current value of the DefaultPort.
To do what you are asking, simply call Bindings.Add() and set the new object's IP and Port properties. For example:
Delphi:
procedure TMyForm.ConnectButtonClick(Sender: TObject);
var
LIP: string;
LPort: TIdPort;
LBinding: TIdSocketHandle;
begin
LIP := ServerIPEdit.Text;
LPort := StrToInt(ServerPortEdit.Text);
IdTCPServer1.Active := False;
IdTCPServer1.Bindings.Clear;
LBinding := IdTCPServer1.Bindings.Add;
LBinding.IP := LIP;
LBinding.Port := LPort;
IdTCPServer1.Active := True;
end;
C++:
void __fastcall TMyForm::ConnectButtonClick(TObject *Sender)
{
String LIP = ServerIPEdit->Text;
TIdPort LPort = StrToInt(ServerPortEdit->Text);
IdTCPServer1->Active = false;
IdTCPServer1->Bindings->Clear();
TIdSocketHandle *LBinding = IdTCPServer1->Bindings->Add();
LBinding->IP = LIP;
LBinding->Port = LPort;
IdTCPServer1->Active = true;
}
Same thing with TIdUDPServer.
I have read the Akka Java documentation about Multi Node Testing; however, all the code samples are in Scala. Is there any reason for that? A Google search was unsuccessful as well.
EDIT:
To reduce the tumbleweedness of this question, I did try :). A simple translation to Java of the existing Scala code might look like this:
public class ClusterTest {
protected RoleName first;
protected RoleName second;
protected RoleName third;
@Test
public void SimpleClusterListenerClusterJoinTest() throws Exception {
new MultiNodeSpec(new MultiNodeConfig() {{
first = this.role("first");
second = this.role("second");
third = this.role("third");
this.commonConfig(ConfigFactory.parseString(
"akka.crdt.convergent.leveldb.destroy-on-shutdown = on\n" +
"akka.actor.provider = akka.cluster.ClusterActorRefProvider\n" +
"akka.cluster.auto-join = off\n" +
"akka.cluster.auto-down = on\n" +
"akka.loggers = [\"akka.testkit.TestEventListener\"]\n" +
"akka.loglevel = INFO\n" +
"akka.remote.log-remote-lifecycle-events = off")); }}) {
{
Address firstAddress = node(first).address();
#SuppressWarnings("serial")
ArrayList<RoleName> firstnode = new ArrayList<RoleName>() {{
add(first);
}};
Seq<RoleName> fisrtnodeseq = (Seq<RoleName>)JavaConversions.asScalaBuffer(firstnode).toList();
runOn(fisrtnodeseq, null);
Cluster cluster = new Cluster((ExtendedActorSystem) system());
cluster.join(firstAddress);
// verify that single node becomes member
cluster.subscribe(testActor(), MemberEvent.class);
expectMsg(MemberUp.class);
}
@Override
public int initialParticipants() {
return roles().size();
}};
}
}
HOWEVER, during the run with the arguments:
-Dmultinode.max-nodes=4 -Dmultinode.host=127.0.0.1 etc. according to Multi Node Testing (if I list all of the arguments here, the editor heavily complains :[ ), I get the following error:
java.lang.IllegalArgumentException: invalid ActorSystem name [ClusterTest_2], must contain only word characters (i.e. [a-zA-Z0-9] plus non-leading '-')
at akka.actor.ActorSystemImpl.<init>(ActorSystem.scala:497)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:141)
at akka.actor.ActorSystem$.apply(ActorSystem.scala:118)
at akka.remote.testkit.MultiNodeSpec.<init>(MultiNodeSpec.scala:252)
at com.akkamint.demo.ClusterTest$2.<init>(ClusterTest.java:51)
Is the internally generated ActorSystem name wrong?
Besides this I have two questions:
How can I access the gossip from Java as in the Scala code,
awaitCond(Cluster(system).latestGossip.members.exists(m ⇒ m.address == firstAddress && m.status == Up))
I have not found any way to implement the same in Java. My workaround is to subscribe to member events (see above), but I do not know whether this is effectively the same or not.
What is the thunk function (the second argument of the runOn method)? How can I use it?
Trying to get ScalikeJDBC and SQLite working. I have simple code based on the provided examples:
import scalikejdbc._, SQLInterpolation._
object Test extends App {
Class.forName("org.sqlite.JDBC")
ConnectionPool.singleton("jdbc:sqlite:test.db", null, null)
implicit val session = AutoSession
println(sql"""SELECT * FROM kv WHERE key == 'seq' LIMIT 1""".map(identity).single().apply()))
}
It fails with an exception:
Exception in thread "main" java.sql.SQLException: Cannot change read-only flag after establishing a connection. Use SQLiteConfig#setReadOnly and QLiteConfig.createConnection().
at org.sqlite.SQLiteConnection.setReadOnly(SQLiteConnection.java:447)
at org.apache.commons.dbcp.DelegatingConnection.setReadOnly(DelegatingConnection.java:377)
at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.setReadOnly(PoolingDataSource.java:338)
at scalikejdbc.DBConnection$class.readOnlySession(DB.scala:138)
at scalikejdbc.DB.readOnlySession(DB.scala:498)
...
I've tried both ScalikeJDBC 1.7 and 2.0; the error remains. As the SQLite driver I use "org.xerial" % "sqlite-jdbc" % "3.7.+".
What can I do to fix the error?
The following will create two separate connection pools, one for read-only operations and the other for writes.
ConnectionPool.add("mydb", s"jdbc:sqlite:${db.getAbsolutePath}", "", "")
ConnectionPool.add(
"mydb_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(s"jdbc:sqlite:${db.getAbsolutePath}")
new DataSourceConnectionPool(source)
}
)
I found that the reason is that you're using "org.xerial" % "sqlite-jdbc" % "3.7.15-M1". This version still looks unstable.
Use "3.7.2", the same as @kawty.
Building on @Synesso's answer, I expanded it slightly to be able to read config values from config files and to set connection settings:
import javax.sql.DataSource
import org.sqlite.{SQLiteConfig, SQLiteDataSource}
import scalikejdbc._
import scalikejdbc.config.TypesafeConfigReader
case class SqlLiteDataSourceConnectionPool(source: DataSource,
override val settings: ConnectionPoolSettings)
extends DataSourceConnectionPool(source)
// read settings for 'default' database
val cpSettings = TypesafeConfigReader.readConnectionPoolSettings()
val JDBCSettings(url, user, password, driver) = TypesafeConfigReader.readJDBCSettings()
// use those to create two connection pools
ConnectionPool.add("db", url, user, password, cpSettings)
ConnectionPool.add(
"db_ro", {
val conf = new SQLiteConfig()
conf.setReadOnly(true)
val source = new SQLiteDataSource(conf)
source.setUrl(url)
SqlLiteDataSourceConnectionPool(source, cpSettings)
}
)
// example using 'NamedDB'
val name: Option[String] = NamedDB("db_ro") readOnly { implicit session =>
sql"select name from users where id = $id".map(rs => rs.string("name")).single.apply()
}
This worked for me with org.xerial/sqlite-jdbc 3.28.0:
String path = ...
SQLiteConfig config = new SQLiteConfig();
config.setReadOnly(true);
return DriverManager.getConnection("jdbc:sqlite:" + path, config.toProperties());
Interestingly, I wrote a different solution on the issue on the xerial repo:
PoolProperties props = new PoolProperties();
props.setDriverClassName("org.sqlite.JDBC");
props.setUrl("jdbc:sqlite:...");
Properties extraProps = new Properties();
extraProps.setProperty("open_mode", SQLiteOpenMode.READONLY.flag + "");
props.setDbProperties(extraProps);
// This line can be left in or removed; it no longer causes a problem
// as long as the open_mode code is present.
props.setDefaultReadOnly(true);
return new DataSource(props);
I don't recall why I needed the second, and was then able to simplify it back to the first one. But if the first doesn't work, you might try the second. It uses a SQLite-specific open_mode flag that then makes it safe (but unnecessary) to use the setDefaultReadOnly call.
I am in the process of implementing a CNG ECDH key exchange and am then trying to use the BCRYPT_KDF_SP80056A_CONCAT KDF to derive a symmetric AES-256 key (BCryptDeriveKey()). I am having a problem (I always get a 0xC000000D status returned).
I have generated a shared secret successfully and I have created the buffer desc "BCryptBufferDesc", which has an array of "BCryptBuffer" with 1 AlgorithmID, 1 PartyU and 1 PartyV "other info". I think I have the structures all defined and populated properly. I am just picking some "values" for the PartyU and PartyV bytes (I tried 1 byte and 16 bytes for each but I get the same result). The NIST documentation gives no details about what the other info should be.
I have followed the Microsoft web site for creating these structures, using their strings, defines, etc. I tried with the standard L"HASH" KDF and it works and I get the same derived key on both "sides", but with the concatenation KDF I always get the same 0xC000000D status back.
Has anybody else been able to successfully use BCRYPT_KDF_SP80056A_CONCAT CNG KDF? If you did, do you have any hints?
This worked for me:
#include <windows.h>
#include <bcrypt.h>
#pragma comment(lib, "bcrypt.lib")

ULONG derivedKeySize = 32;
BCryptBufferDesc params;
params.ulVersion = BCRYPTBUFFER_VERSION;
params.cBuffers = 3;
params.pBuffers = new BCryptBuffer[params.cBuffers];
params.pBuffers[0].cbBuffer = 0;
params.pBuffers[0].BufferType = KDF_ALGORITHMID;
params.pBuffers[0].pvBuffer = new BYTE[0];
params.pBuffers[1].cbBuffer = 0;
params.pBuffers[1].BufferType = KDF_PARTYUINFO;
params.pBuffers[1].pvBuffer = new BYTE[0];
params.pBuffers[2].cbBuffer = 0;
params.pBuffers[2].BufferType = KDF_PARTYVINFO;
params.pBuffers[2].pvBuffer = new BYTE[0];
NTSTATUS rv = BCryptDeriveKey(secretHandle, L"SP800_56A_CONCAT", &params, NULL, 0, &derivedKeySize, 0);
if (rv != 0){/*fail*/}
UCHAR* derivedKey = new UCHAR[derivedKeySize];
rv = BCryptDeriveKey(secretHandle, L"SP800_56A_CONCAT", &params, derivedKey, derivedKeySize, &derivedKeySize, 0);
if (rv != 0){/*fail*/}