How to set CPU affinity for the thread used by spdlog when logging in async mode? - cpu-usage

I am using spdlog for logging in async mode. I want to assign the task of logging to just one CPU core. Is there an API to achieve this in spdlog?

For now, I have a workaround that sets the affinity while creating the thread pool in the library file threadpool-inl.h (a less invasive sketch follows the snippet below):
SPDLOG_INLINE thread_pool::thread_pool(size_t q_max_items, size_t threads_n, std::function<void()> on_thread_start)
    : q_(q_max_items)
{
    // printf("number of threads %lu", threads_n);
    if (threads_n == 0 || threads_n > 1000)
    {
        throw_spdlog_ex("spdlog::thread_pool(): invalid threads_n param (valid "
                        "range is 1-1000)");
    }
    for (size_t i = 0; i < threads_n; i++)
    {
        threads_.emplace_back([this, on_thread_start] {
            on_thread_start();
            this->thread_pool::worker_loop_();
        });
        // Create a cpu_set_t object representing a set of CPUs. Clear it and mark only CPU 2 as set.
        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(2, &cpuset);
        int rc = pthread_setaffinity_np(threads_[i].native_handle(), sizeof(cpu_set_t), &cpuset);
        if (rc != 0) {
            printf("Error calling pthread_setaffinity_np: %d\n", rc);
        }
    }
}
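If your spdlog version exposes the three-argument overload of spdlog::init_thread_pool() (the thread_pool constructor above already accepts an on_thread_start callback), you can set the affinity from application code instead of patching threadpool-inl.h. A minimal sketch, assuming Linux/pthreads and that this overload is available; call it before creating any async logger:

#include <pthread.h>
#include <sched.h>
#include <cstdio>
#include "spdlog/async.h"
#include "spdlog/sinks/basic_file_sink.h"

int main()
{
    // This callback runs inside each newly created pool thread,
    // so pthread_self() refers to the worker itself.
    auto pin_to_cpu2 = [] {
        cpu_set_t cpuset;
        CPU_ZERO(&cpuset);
        CPU_SET(2, &cpuset);
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(cpu_set_t), &cpuset);
        if (rc != 0) {
            std::fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
        }
    };

    spdlog::init_thread_pool(8192, 1, pin_to_cpu2); // 8192-slot queue, one worker
    auto logger = spdlog::basic_logger_mt<spdlog::async_factory>("pinned", "async.log");
    logger->info("logging from a worker pinned to CPU 2");
    spdlog::shutdown();
}

This keeps the library untouched and pins only the pool thread; the rest of the application is unaffected.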

Related

How to create a single SQPOLL thread in io_uring for multiple rings (IORING_SETUP_SQPOLL)

Is it possible to create a single SQPOLL (iou-sqp) thread that polls submit requests of multiple io_uring rings?
This question comes from the desire to use multiple io_uring rings without making syscalls (entering the kernel) when submitting I/O requests. To achieve this with a single ring, one creates an SQPOLL thread by passing the IORING_SETUP_SQPOLL flag to the io_uring_setup() call. However, if multiple rings are created this way, multiple SQPOLL threads also get created (one thread per ring). As a result we end up with several SQPOLL threads, each busy-polling its respective submit queue. Having a single SQPOLL thread would save CPU usage and in most cases would be enough to sustain the I/O load.
I tried to create a global uring file descriptor
static int RingFd;
static struct io_uring_params RingParams;
// ...
memset(&RingParams, 0, sizeof(RingParams));
RingParams.flags |= IORING_SETUP_SQPOLL;
RingParams.sq_thread_idle = 100;
RingFd = io_uring_setup(maxEvents, &RingParams);
if (RingFd < 0) {
    // ...
}
// ...
and mmap it to each uring
struct io_uring Ring;
int ret = io_uring_queue_mmap(RingFd, &RingParams, &Ring);
if (ret < 0) {
    // ...
}
// ...
but it doesn't work.
You can do this by using the IORING_SETUP_ATTACH_WQ flag in combination with IORING_SETUP_SQPOLL. See the test case sq-poll-share in the liburing repo:
https://github.com/axboe/liburing/blob/7ad5e52d4d2f91203615cd738e56aba10ad8b8f6/test/sq-poll-share.c
See also this conversation:
https://github.com/axboe/liburing/issues/324
Relevant bits:
for (i = 0; i < NR_RINGS; i++) {
    struct io_uring_params p = { };
    p.flags = IORING_SETUP_SQPOLL;
    if (i) {
        p.wq_fd = rings[0].ring_fd;
        p.flags |= IORING_SETUP_ATTACH_WQ;
    }
    ret = io_uring_queue_init_params(BUFFERS, &rings[i], &p);
    if (ret) {
        fprintf(stderr, "queue_init: %d/%d\n", ret, i);
        goto err;
    }
    /* no sharing for non-fixed either */
    if (!(p.features & IORING_FEAT_SQPOLL_NONFIXED)) {
        fprintf(stdout, "No SQPOLL sharing, skipping\n");
        return 0;
    }
}
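For completeness, here is a minimal self-contained sketch of the same idea using liburing (not the test case itself; error handling is trimmed, and SQPOLL may require elevated privileges on older kernels):

#include <liburing.h>
#include <cstdio>
#include <cstring>

int main()
{
    struct io_uring rings[2];

    for (int i = 0; i < 2; i++) {
        struct io_uring_params p;
        std::memset(&p, 0, sizeof(p));
        p.flags = IORING_SETUP_SQPOLL;
        p.sq_thread_idle = 100;             // ms of idle before the poller sleeps
        if (i) {
            p.wq_fd = rings[0].ring_fd;     // share ring 0's SQPOLL thread
            p.flags |= IORING_SETUP_ATTACH_WQ;
        }
        if (io_uring_queue_init_params(64, &rings[i], &p) < 0) {
            std::perror("io_uring_queue_init_params");
            return 1;
        }
    }

    // Submit on the second ring; while the shared SQPOLL thread is awake,
    // io_uring_submit() does not need to enter the kernel.
    struct io_uring_sqe *sqe = io_uring_get_sqe(&rings[1]);
    io_uring_prep_nop(sqe);
    io_uring_submit(&rings[1]);

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&rings[1], &cqe);
    io_uring_cqe_seen(&rings[1], cqe);

    io_uring_queue_exit(&rings[1]);
    io_uring_queue_exit(&rings[0]);
    return 0;
}

As in the test case, it is worth checking IORING_FEAT_SQPOLL_NONFIXED in p.features before relying on the shared poller with normal (non-registered) file descriptors.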

How to implement a TCP server that receives and sends data in two processes?

I am trying to implement a TCP server using C on Linux. I want this server to accept incoming data forever from multiple clients and, at the same time, send some data back to each connected client every 3 seconds.
My problem is I don't know how to properly do send() in a different process than the one handling the client.
What I am doing is call fork() at the beginning of the program and run
while (1) {
    sleep(3);
    // compute and `send()` data to each connected peer
}
in the child process, and run
sock = create_socket();
while (1) {
    client_sock = accept_connection(sock);
    if (fork() == 0) {
        close(sock);
        handle_client(client_sock);
        exit(0);
    }
    close(client_sock);
    // clean up zombies
}
in the parent process. handle_client() simply recv()s data in an infinite loop. Because send() and recv() are executed in different processes, I can't use the client socket file descriptors to send() from the parent process. What do I need to do in the parent process to do the send()?
You have three levels of processes: a parent, a child, and many grandchildren. Get rid of these levels and do not fork at all; instead, use an event-driven model in a single process.
In rough pseudo-code (translate to your preferred language):
listening_fd = create_socket();

EventQueueOfSomeKind q;               // kqueue()-style
q.add_or_update_event(listening_fd, EVFILT_READ, EV_ENABLE);
q.add_or_update_event(3, EVFILT_TIMER, EV_ENABLE, NOTE_SECONDS);

FDToContextMapOfSomeKind context_map;
EventVector event_vector;             // vector of kevent-like things

while (1) {
    q.wait_for_events(&event_vector); // kevent()-style
    foreach e <- event_vector {
        switch (e.type) {
        case EVFILT_READ:
            if (listening_fd == e.fd) {
                client_sock = accept_connection(e.fd, SOCK_NONBLOCK);
                q.add_or_update_event(client_sock, EVFILT_READ, EV_ENABLE);
                q.add_or_update_event(client_sock, EVFILT_WRITE, EV_DISABLE);
                context_map.add_new_context(client_sock);
            } else {
                // Must be one of the client sockets
                if (e.flags & EV_EOF) {
                    context_map.remove_context(e.fd);
                    q.remove_event(e.fd, EVFILT_READ);
                    q.remove_event(e.fd, EVFILT_WRITE);
                    close(e.fd);
                } else {
                    recv(e.fd, buffer);
                    handle_client_input(&context_map[e.fd], buffer);
                }
            }
            break;
        case EVFILT_WRITE:
            if (has_queued_output(context_map[e.fd])) {
                send(e.fd, pull_queued_output(&context_map[e.fd]));
            } else {
                q.add_or_update_event(e.fd, EVFILT_WRITE, EV_DISABLE);
            }
            break;
        case EVFILT_TIMER:
            foreach client_sock,context <- context_map {
                push_queued_output(&context, computed_data(context));
                q.add_or_update_event(client_sock, EVFILT_WRITE, EV_ENABLE);
            }
            break;
        }
    }
}
I have glossed over partial send()s and recv()s, write-side shutdown, and all error handling but this is the general idea.
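Concretely, the registration step of this pseudo-code might look like the following with the raw kqueue API (a sketch for BSD/macOS, or Linux via libkqueue, assuming NOTE_SECONDS is supported):

#include <sys/types.h>
#include <sys/event.h>
#include <sys/time.h>

void register_events(int kq, int listening_fd)
{
    struct kevent changes[2];
    // Readability on the listening socket means a connection is ready to accept().
    EV_SET(&changes[0], listening_fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, nullptr);
    // Periodic timer: ident 1 is an arbitrary timer id, firing every 3 seconds.
    EV_SET(&changes[1], 1, EVFILT_TIMER, EV_ADD | EV_ENABLE, NOTE_SECONDS, 3, nullptr);
    kevent(kq, changes, 2, nullptr, 0, nullptr);
}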
Further reading
https://github.com/mheily/libkqueue
Jonathan Lemon. kqueue. OpenBSD System Calls Manual.
Jonathan Lemon. kqueue. Darwin BSD Calls Manual. Apple corporation.
This is a solution using Linux epoll and timerfd (error handling is omitted):
int start_timer(unsigned int interval) {
    int tfd;
    struct itimerspec tspec;
    /* first expiry: an absolute time that has already passed, so the timer fires immediately */
    tspec.it_value.tv_sec = 1;
    tspec.it_value.tv_nsec = 0;
    /* then fire every `interval` seconds */
    tspec.it_interval.tv_sec = interval;
    tspec.it_interval.tv_nsec = 0;
    tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    timerfd_settime(tfd, TFD_TIMER_ABSTIME, &tspec, NULL);
    return tfd;
}

void epset_add(int epfd, int fd, uint32_t events)
{
    struct epoll_event ev;
    ev.data.fd = fd;
    ev.events = events;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

int main()
{
    int epfd, tfd, sock, nfds, i;
    struct epoll_event events[MAX_EVENTS];

    /* create new epoll instance */
    epfd = epoll_create1(0);
    tfd = start_timer(TIMER_INTERVAL);

    /* socket(), bind() and listen() omitted in create_socket() */
    sock = create_socket(PORT_NUMBER);

    /* add sock and tfd to epoll set */
    epset_add(epfd, tfd, EPOLLIN);
    epset_add(epfd, sock, EPOLLIN | EPOLLET);

    for (;;) {
        nfds = epoll_wait(epfd, events, MAX_EVENTS, -1);
        for (i = 0; i < nfds; ++i) {
            if (events[i].data.fd == tfd) {
                /* handle timer notification, it's run
                   periodically with interval TIMER_INTERVAL */
            } else if (events[i].data.fd == sock) {
                /* accept() incoming connections,
                   set non-blocking,
                   and add new connection sockets to epoll set */
            } else {
                /* recv() from connection sockets and handle */
            }
        }
    }
}
This program was helpful: https://github.com/eklitzke/epollet/blob/master/poll.c. I added a timerfd to the epoll set, so the server keeps listening and receiving data and at the same time can send data to the clients periodically.
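One detail glossed over in the timer branch: with a level-triggered timerfd in the epoll set, the 8-byte expiration counter has to be read, otherwise the descriptor stays readable and epoll_wait() returns immediately on every iteration. A hypothetical handler for the events[i].data.fd == tfd branch:

#include <stdint.h>
#include <unistd.h>

void handle_timer_tick(int tfd)
{
    uint64_t expirations;
    // Drain the expiration count so the timerfd stops reporting readable.
    read(tfd, &expirations, sizeof(expirations));
    // ...then compute and send() the periodic data to each connected client.
}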

Decrypt a function at run time and use it in Qt C++

I'm new to Qt and I'm trying to create an encrypted function.
Overall, what you do in C/C++ is:
Take a pointer to the function
Make the function's page RWX
Encrypt it (for the example I encrypt and decrypt in the same program)
Decrypt it and run it
Simple code in C looks roughly like this:
void TestFunction()
{
    printf("\nmsgbox test encrypted func\n");
}

// use this as an end label
void FunctionStub() { return; }

void XorBlock(DWORD dwStartAddress, DWORD dwSize)
{
    char *addr = (char *)dwStartAddress;
    for (int i = 0; i < dwSize; i++)
    {
        addr[i] ^= 0xff;
    }
}

DWORD GetFuncSize(DWORD* Function, DWORD* StubFunction)
{
    DWORD dwFunctionSize = 0, dwOldProtect;
    DWORD *fnA = NULL, *fnB = NULL;
    fnA = (DWORD *)Function;
    fnB = (DWORD *)StubFunction;
    dwFunctionSize = (fnB - fnA);
    VirtualProtect(fnA, dwFunctionSize, PAGE_EXECUTE_READWRITE, &dwOldProtect); // make function page read write execute permission
    return dwFunctionSize;
}

int main()
{
    DWORD dwFuncSize = GetFuncSize((DWORD*)&TestFunction, (DWORD*)&FunctionStub);
    printf("use func");
    TestFunction();
    XorBlock((DWORD)&TestFunction, dwFuncSize); // XOR encrypt the function
    printf("after enc");
    //TestFunction(); // If you try to run the encrypted function you will get Access Violation Exception.
    XorBlock((DWORD)&TestFunction, dwFuncSize); // XOR decrypt the function
    printf("after\n");
    TestFunction(); // Fine here
    getchar();
}
When I try to run such an example in Qt I get a run-time error.
Here is the code in Qt:
void TestFunction()
{
    QMessageBox::information(0, "Test", "msgbox test encrypted func");
}

void FunctionStub() { return; }

void XorBlock(DWORD dwStartAddress, DWORD dwSize)
{
    char *addr = (char *)dwStartAddress;
    for (int i = 0; i < dwSize; i++)
    {
        addr[i] ^= 0xff; // here i get seg. fault
    }
}

DWORD GetFuncSize(DWORD* Function, DWORD* StubFunction)
{
    DWORD dwFunctionSize = 0, dwOldProtect;
    DWORD *fnA = NULL, *fnB = NULL;
    fnA = (DWORD *)Function;
    fnB = (DWORD *)StubFunction;
    dwFunctionSize = (fnB - fnA);
    VirtualProtect(fnA, dwFunctionSize, PAGE_EXECUTE_READWRITE, &dwOldProtect); // Need to modify our privileges to the memory
    QMessageBox::information(0, "Test", "change func to read write execute ");
    return dwFunctionSize;
}

void check_enc_function()
{
    DWORD dwFuncSize = GetFuncSize((DWORD*)&TestFunction, (DWORD*)&FunctionStub);
    QMessageBox::information(0, "Test", "use func");
    TestFunction();
    XorBlock((DWORD)&TestFunction, dwFuncSize); // XOR encrypt the function -> ### i get seg fault in here ###
    QMessageBox::information(0, "Test", "after enc");
    TestFunction(); // If you try to run the encrypted function you will get Access Violation Exception.
    XorBlock((DWORD)&TestFunction, dwFuncSize); // XOR decrypt the function
    QMessageBox::information(0, "Test", "after dec");
    TestFunction(); // Fine here
    getchar();
}
Why does this happen?
Qt is supposed to behave just like standard C++...
P.S.
On the same subject, what is the most legitimate way to keep an important function encrypted (the reason it is encrypted is DRM)?
By legitimate I mean that antivirus software will not mistakenly flag my program as a virus because I protect myself this way.
P.S. 2
If I pass an encrypted function over the network (say, in a client-server scheme where the client asks the server for the function it needs to run and the server sends it if the request is approved), how can I arrange the symbols so that the function does not break?
P.S. 3
How can I turn off the DEP and ASLR protections in Qt? (I believe I have to disable them in order to do what P.S. 2 describes.)
Thanks
yoko
The example is undefined behaviour on my system.
The first and main issue in your code is:
void TestFunction() { /* ... */ }
void FunctionStub() { return; }
You assume that the compiler will put FunctionStub right after TestFunction without any padding. I compiled your example, and in my case FunctionStub was above TestFunction, which resulted in a negative dwFunctionSize.
dwFunctionSize = (fnB - fnA);
TestFunction located at # 0xa11d90
FunctionStub located at # 0xa11b50
dwFunctionSize = -0x240
Also in XorBlock
addr[i] ^= 0xff;
is doing nothing.
I assume that in XorBlock you want to write to that memory range in order to XOR the entire TestFunction.
You could do something like this:
void XorBlock(DWORD dwStartAddress, DWORD dwSize)
{
    DWORD dwEndAddress = dwStartAddress + dwSize;
    for (DWORD i = dwStartAddress; i < dwEndAddress; i++) {
        // ...
    }
}
I can't see anything Qt-specific in your example. Even if it's a Qt function call, it's still just a call. So I guess you have undefined behaviour in both examples, but only the second one crashes.
I can't see any reason for the compiler and linker to keep the function order. For example, GCC lets you specify the code section for each function, so functions can be reordered in the executable without being reordered in the .cpp file.
I think you need some compiler-specific machinery to make this work.
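For instance, with GCC or Clang you can place both functions in a dedicated named section (MSVC has #pragma code_seg for a similar effect). This is only a sketch; even then, the relative order of the two functions inside the section is not guaranteed by the standard, so the address subtraction stays fragile, and reading the section bounds via linker-defined symbols or a linker script is the more reliable route:

#include <cstdio>

// GCC/Clang syntax; the section name .enc_text is arbitrary.
__attribute__((noinline, section(".enc_text")))
void TestFunction()
{
    printf("\nmsgbox test encrypted func\n");
}

__attribute__((noinline, section(".enc_text")))
void FunctionStub() { return; }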

Android BLE: writing >20 bytes characteristics missing the last byte array

I have been implementing a module to send bytes in chunks of 20 bytes each to an MCU device over BLE. When writing more than about 60 bytes, the last chunk (usually less than 20 bytes) is often missed, so the MCU device cannot get the checksum and write the value. I have added Thread.sleep(200) in the callback to work around it, but it only sometimes works when writing 61 bytes. Could you please tell me whether there is any synchronous method to write the bytes in chunks? Below is what I have:
@Override
public void onCharacteristicWrite(BluetoothGatt gatt,
        BluetoothGattCharacteristic characteristic, int status) {
    try {
        Thread.sleep(300);
        if (status != BluetoothGatt.GATT_SUCCESS) {
            disconnect();
            return;
        }
        if (status == BluetoothGatt.GATT_SUCCESS) {
            System.out.println("ok");
            broadcastUpdate(ACTION_DATA_READ, mReadCharacteristic, status);
        } else {
            System.out.println("fail");
            broadcastUpdate(ACTION_DATA_WRITE, characteristic, status);
        }
    } catch (Exception e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}

public synchronized boolean writeCharacteristicData(BluetoothGattCharacteristic characteristic,
        byte[] byteResult) {
    if (mBluetoothAdapter == null || mBluetoothGatt == null) {
        return false;
    }
    boolean status = false;
    characteristic.setValue(byteResult);
    characteristic.setWriteType(BluetoothGattCharacteristic.WRITE_TYPE_NO_RESPONSE);
    status = mBluetoothGatt.writeCharacteristic(characteristic);
    return status;
}

private void sendCommandData(final byte[] commandByte) {
    // TODO Auto-generated method stub
    if (commandByte.length > 20) {
        final List<byte[]> bytestobeSent = splitInChunks(commandByte);
        for (int i = 0; i < bytestobeSent.size(); i++) {
            for (int k = 0; k < bytestobeSent.get(i).length; k++) {
                System.out.println("LumChar bytes : " + bytestobeSent.get(i)[k]);
            }
            BluetoothGattService LumService = mBluetoothGatt.getService(A_SERVICE);
            if (LumService == null) { return; }
            BluetoothGattCharacteristic LumChar = LumService.getCharacteristic(AW_CHARACTERISTIC);
            if (LumChar == null) { System.out.println("LumChar"); return; }
            //Thread.sleep(500);
            writeCharacteristicData(LumChar, bytestobeSent.get(i));
        }
    } else {
        ....
You need to wait for the onCharacteristicWrite() callback to be invoked before sending the next write. The typical solution is to make a job queue and pop a job off the queue for each callback you get to onCharacteristicWrite(), onCharacteristicRead(), etc.
In other words, you can't do it in a for loop unfortunately, unless you want to set up some kind of lock that waits for the callback before going on to the next iteration. In my experience a job queue is a cleaner general-purpose solution though.

loadjava seems to work but the query doesn't work in Oracle SQL Developer

I am trying to load a Java class into Oracle as a function. On the server I managed to run loadjava as below:
C:\Users\n12017>loadjava -user USER1/passw E:\JAVA_repository\SOOSProjects\Mehmet_java_2db_trial\classes\mehmet_java_2db_trial\kondrakk.class
And on the Oracle DB side:
create or replace function ngram_kondrakk(src in varchar2, trg in varchar2)
return float
as language java
name 'mehmet_java_2db_trial/kondrakk.getDistance(java.lang.string, java.lang.string) return java.lang.float';
/
However, when I run the query below, I get an error. (As a result of the query I am expecting a similarity score of 1, since two identical strings are compared.)
select ngram_kondrakk('mehmet','mehmet') from dual;
Here is the error:
ORA-29532: Java call terminated by uncaught Java exception: System error : java/lang/UnsupportedClassVersionError
29532. 00000 - "Java call terminated by uncaught Java exception: %s"
*Cause: A Java exception or error was signaled and could not be
resolved by the Java code.
*Action: Modify Java code, if this behavior is not intended.
Finally, here is the code that I am trying to use:
package mehmet_java_2db_trial;

public class kondrakk {
    public static float getDistance(String source, String target) {
        final int sl = source.length();
        final int tl = target.length();
        if (sl == 0 || tl == 0) {
            if (sl == tl) {
                return 1;
            }
            else {
                return 0;
            }
        }
        int n = 3;
        int cost = 0;
        if (sl < n || tl < n) {
            for (int i = 0, ni = Math.min(sl, tl); i < ni; i++) {
                if (source.charAt(i) == target.charAt(i)) {
                    cost++;
                }
            }
            return (float) cost / Math.max(sl, tl);
        }
        char[] sa = new char[sl + n - 1];
        float p[];  // 'previous' cost array, horizontally
        float d[];  // cost array, horizontally
        float _d[]; // placeholder to assist in swapping p and d
        // construct sa with prefix
        for (int i = 0; i < sa.length; i++) {
            if (i < n - 1) {
                sa[i] = 0; // add prefix
            }
            else {
                sa[i] = source.charAt(i - n + 1);
            }
        }
        p = new float[sl + 1];
        d = new float[sl + 1];
        // indexes into strings s and t
        int i; // iterates through source
        int j; // iterates through target
        char[] t_j = new char[n]; // jth n-gram of t
        for (i = 0; i <= sl; i++) {
            p[i] = i;
        }
        for (j = 1; j <= tl; j++) {
            // construct t_j n-gram
            if (j < n) {
                for (int ti = 0; ti < n - j; ti++) {
                    t_j[ti] = 0; // add prefix
                }
                for (int ti = n - j; ti < n; ti++) {
                    t_j[ti] = target.charAt(ti - (n - j));
                }
            }
            else {
                t_j = target.substring(j - n, j).toCharArray();
            }
            d[0] = j;
            for (i = 1; i <= sl; i++) {
                cost = 0;
                int tn = n;
                // compare sa to t_j
                for (int ni = 0; ni < n; ni++) {
                    if (sa[i - 1 + ni] != t_j[ni]) {
                        cost++;
                    }
                    else if (sa[i - 1 + ni] == 0) { // discount matches on prefix
                        tn--;
                    }
                }
                float ec = (float) cost / tn;
                // minimum of cell to the left+1, to the top+1, diagonally left and up +cost
                d[i] = Math.min(Math.min(d[i - 1] + 1, p[i] + 1), p[i - 1] + ec);
            }
            // copy current distance counts to 'previous row' distance counts
            _d = p;
            p = d;
            d = _d;
        }
        // our last action in the above loop was to switch d and p, so p now
        // actually has the most recent cost counts
        System.out.println(1.0f - (p[sl] / Math.max(tl, sl)));
        return 1.0f - (p[sl] / Math.max(tl, sl));
    }
}
Please HELP!
Thanks in advance...
When compiling Java classes to load into an Oracle database, be sure that you compile your Java classes to run with the JVM inside the Oracle database.
The version of Java that comes with an Oracle database is typically out of date compared to the current Java: expect to be using Java 1.4 with Oracle 10g and 1.5 with 11g. Your best bet is to use the Java compiler that ships with the database, but if you can't do that, use -target 1.5 or suchlike to force the compiler to compile classes to run on Java 1.5.
Your PL/SQL wrapper function has a "/":
...mehmet_java_2db_trial/kondrakk.getDistance...
Replace the "/" with "." [dot].
Check the docs.
And, as already mentioned, make sure the JVM you compile with matches the run-time JVM (which will be the Oracle JVM that is "attached" to the DB).

Resources