How to make Kaa log upload event based instead of time based

I've only recently started to work with KaaIoT, and I am wondering whether there is another way to upload a log bucket to the server.
/* some headers */

static void main_callback(void *context)
{
    kaa_user_log_record_t *log_record = kaa_logging_time_collection_create();
    log_record->test_time = kaa_string_copy_create("some_time");
    kaa_logging_add_record(kaa_client_get_context(context)->log_collector, log_record, NULL);
}

/* some other configuration */
error = kaa_client_start(kaa_client, main_callback, kaa_client, 5);
When I execute this code, the string "some_time" is sent to the server every 5 seconds.
I was wondering whether there is another way to do this, for example uploading the log to the server when I press the Enter key, but I can't seem to find a command for this.

To my understanding, kaa_logging_add_record just adds the record to the log bucket, where it waits to be sent according to the logging strategy you have defined (https://kaaproject.github.io/kaa/autogen-docs/client-c/v0.10.0/kaa__logging_8h.html#af0fadc09a50f5e38603271a08c581417). The 5-second parameter in kaa_client_start is only the delay between calls of the callback function. If you want to register an event, you first have to store it in the log bucket, together with a timestamp if you want to record when it happened. If you need to notify the server at that very moment, I think you should use Notifications or Events instead. I am also scratching my head over something similar and wonder if there is a better way.
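If you only need the record to be added when a key is pressed, one workaround (just a sketch using standard C, not a dedicated Kaa API for event-based upload) is to poll stdin inside the same callback and only call kaa_logging_add_record when Enter has been pressed; the bucket is still flushed by whatever upload strategy is configured:

/* Sketch: poll stdin without blocking and add a log record only when Enter is pressed. */
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static void main_callback(void *context)
{
    fd_set fds;
    struct timeval tv = { 0, 0 };   /* zero timeout so the Kaa client loop is not blocked */

    FD_ZERO(&fds);
    FD_SET(STDIN_FILENO, &fds);

    /* Only add a record when the user pressed just the Enter key. */
    if (select(STDIN_FILENO + 1, &fds, NULL, NULL, &tv) > 0 && getchar() == '\n') {
        kaa_user_log_record_t *log_record = kaa_logging_time_collection_create();
        log_record->test_time = kaa_string_copy_create("some_time");
        kaa_logging_add_record(kaa_client_get_context(context)->log_collector,
                               log_record, NULL);
    }
}

The 5-second parameter to kaa_client_start then only controls how often this poll runs; you can lower it if the key press needs to be picked up more quickly.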

Related

How to create a liquidity pool on Raydium on Solana devnet?

Can anyone give me any advice on how to create an LP pool on the Solana devnet?
I planned this work to test swaps between two specific tokens on the devnet using the Raydium protocol.
So, I need to create a swap pool on the devnet first.
To achieve this, I did it like below.
First of all, to list on the Serum market, I cloned the Raydium-Dex repository on my Mac and changed the Serum DEX's program ID from the mainnet to the devnet, and I successfully registered on the devnet Serum (custom token with SOL pairs).
As a result, I got a Serum market program ID.
After that, I cloned the Raydium-frontend repository to create a liquidity pool, and modified wellknownToken.config.ts so that my custom tokens could be used to create a new pool.
Finally, I could access the create pool UI from the localhost web UI.
I clicked the Initialize Liquidity Pool button on the UI and apparently got a toast message, Create a new pool Transaction Sent.
However, it did not work: I cannot find the transaction hash on the Solscan website.
I traced the button's click event code and figured out one thing.
One of the result values of the Liquidity.makeCreatePoolTransaction function is null, specifically feePayer.
const { transaction: sdkTransaction1, signers: sdkSigners1 } = Liquidity.makeCreatePoolTransaction({
    poolKeys: sdkAssociatedPoolKeys,
    userKeys: { payer: owner }
})
const testTx = await loadTransaction({ transaction: sdkTransaction1, signers: sdkSigners1 })
console.log('feepayer', testTx.feePayer?.toBase58()) // null
I feel this is not the preferred (good) way to create a swap pool on the Solana devnet, but I cannot find a better way to achieve this task.
What am I missing, or what should I read or learn?
Please give me some advice on how to make this work.
Thanks.
It looks like the transaction created with Liquidity.makeCreatePoolTransaction hasn't been sent to the network, so it doesn't exist anywhere. Be sure to send and confirm the transaction before trying to load it. You can use something like:
const { transaction: sdkTransaction1, signers: sdkSigners1 } = Liquidity.makeCreatePoolTransaction({
    poolKeys: sdkAssociatedPoolKeys,
    userKeys: { payer: owner }
});
// Send the transaction built above and wait for confirmation before trying to load it
await sendAndConfirmTransaction(connection, sdkTransaction1, [wallet, ...sdkSigners1]);
Note that this requires a connection to send and a wallet to sign. More info at https://github.com/solana-labs/solana/blob/9ac2245970de90af30bff9d1f7f35cd2d8f2bf6d/web3.js/src/util/send-and-confirm-transaction.ts#L18
You might run into other issues though, because the Raydium program isn't deployed to devnet: https://explorer.solana.com/address/675kPX9MHTjS2zt1qfr1NYHuzeLXfQM9H24wFSUt1Mp8?cluster=devnet

How to read documents from Change Feed in Azure Cosmos DB since last checkpoint after restart?

I am using the Change Feed processor library to read the Change Feed on a partitioned collection, and below is how I have configured it. I am using most of the default options.
ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions
{
    LeaseRenewInterval = TimeSpan.FromSeconds(15),
};
var docObserverFactory = DocumentFeedObserverFactory.Create(this.destinationCollectionInfo, this.dbRepository);
this.builder
.WithHostName(hostName)
.WithFeedCollection(this.monitoredCollectionInfo)
.WithLeaseCollection(this.leaseCollectionInfo)
.WithProcessorOptions(feedProcessorOptions)
.WithObserverFactory(docObserverFactory);
This runs fine as long as the Change Feed application is running and documents are being inserted/updated in the collection and the Change Feed app picks them up as expected.
The problem happens when I stop the Change Feed app for some time and insert/update a few documents in the collection. When I start the Change Feed app again, it doesn't pick up changes from where it last left off; the changes made while the app was stopped are lost. But when I set the StartFromBeginning flag to true, it picks up everything from the start, including the changes made while the Change Feed app was stopped.
My understanding of reading from the current point (StartFromBeginning set to false) is that the Change Feed returns documents from where it last left off. But that doesn't seem to happen. Please help.
There are two ways to continue from exactly where you left off.
The first, and more accurate, one is to store the continuation token of the last thing you read. That way you can specify it when you start again, and it will win over both the StartTime and StartFromBeginning flags.
The second one is to provide the StartTime property, which will try to find the continuation token for a given time automatically. It has roughly 5-second precision, so there is a chance you might miss some documents.
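As a rough sketch (assuming the StartContinuation and StartTime properties on ChangeFeedProcessorOptions, and a hypothetical variable holding a token you persisted from your observer), the options could look like this:

ChangeFeedProcessorOptions feedProcessorOptions = new ChangeFeedProcessorOptions
{
    LeaseRenewInterval = TimeSpan.FromSeconds(15),
    // Option 1: resume from a continuation token saved by your observer (hypothetical variable)
    StartContinuation = lastSavedContinuationToken,
    // Option 2: resume from an approximate point in time (about 5-second precision)
    // StartTime = DateTime.UtcNow.AddHours(-1),
};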

How to create an alert to notify a user when some percentage of the DailyAsyncApexExecutions threshold is reached

On 2 occasions in the past month, we have managed to hit our daily limit on asynchronous apex executions. Salesforce temporarily increased our limit to 425000 but it will be scaled down to 250000 in a week's time. Once we reach the limit, a lot of the SF functions will fail and this has tremendously impacted both internal staff and external customers.
So to prevent this from happening in the future, we need to create some kind of alert in Salesforce to monitor our daily asynchronous Apex method executions. Our maximum daily limit is 250,000. The alert will need to create a P3 helpdesk ticket and notify a couple of users, say USER A and USER B, once usage reaches a 70% threshold.
Kindly advise what is possible to achieve this.
Thanks & Regards,
Harjeet
There's a promising Limits method but it doesn't seem to work currently ("reserved for future use"): System.debug(Limits.getAsyncCalls() + ' / ' + Limits.getLimitAsyncCalls());
There's an idea you can upvote: https://success.salesforce.com/ideaView?id=0873A0000003VIFQA2 ;)
You could query SELECT COUNT() FROM AsyncApexJob WHERE ... but that sounds like a bad idea ;)
I think your best course of action is to use SF REST API. There's a "limits" resource you can fetch. You could do it from SF itself (bad idea because if you'd schedule it to run every hour then well, of course it will contribute to the limit consumption too ;)) or from some external app that'd connect to your SF...
https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_limits.htm
You can quickly try it out for example in workbench.developerforce.com before you decide you do want to deep dive into coding it.
Of course, if you have control over your batch jobs, queueable, schedulable & @future calls, you could implement some rough counter of executions in a helper object, for example... It won't help you much if most of the jobs are coming from managed packages, though...
Got one more idea, but it's pretty hardcore: you should be able to make a REST API call from JavaScript. So you could create a simple VF page (even without any Apex controller), put a JS callout on it, and have it check every 5 minutes and do something if the threshold is hit... But that means an IT person would have to keep this page open all the time (perhaps as a home page component)... Messy :)
I was having the exact same issue, so I created a simple JSforce script in Node.js to monitor the /limits endpoint.
You can connect a free monitoring service like UptimeRobot.com or Pingdom.com and get an email when it finds the word "Warning" (>50%) or "Error" (>80%).
// Assumes `npm install jsforce` and that SF_USERNAME, SF_PASSWORD and
// SF_SECURITY_TOKEN are defined (e.g. loaded from environment variables)
const jsforce = require('jsforce');
const conn = new jsforce.Connection();

async function getSfLimits() {
    try {
        // Log in to Salesforce
        const login = await conn.login(SF_USERNAME, SF_PASSWORD + SF_SECURITY_TOKEN);
        // Call the limits REST API
        const sfLimits = await conn.requestGet('/services/data/v51.0/limits');
        return sfLimits;
    } catch (err) {
        console.log(err);
    }
}
https://github.com/carlosdevia/salesforcelimits
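As a follow-up sketch (assuming the limits response exposes a DailyAsyncApexExecutions entry with Max and Remaining fields), you can turn the result into the Warning/Error keywords the monitoring service looks for:

getSfLimits().then((limits) => {
    const { Max, Remaining } = limits.DailyAsyncApexExecutions;
    const usedPct = ((Max - Remaining) / Max) * 100;
    // Emit keywords that UptimeRobot/Pingdom can match on
    if (usedPct >= 80) {
        console.log(`Error: DailyAsyncApexExecutions at ${usedPct.toFixed(1)}%`);
    } else if (usedPct >= 50) {
        console.log(`Warning: DailyAsyncApexExecutions at ${usedPct.toFixed(1)}%`);
    }
});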

How to force server-side insert on Meteor's collection.insert

I've got a simple CSV file with 40,000 rows which I'm processing browser-side with papa-parse.
I'm trying to insert them one-by-one into a collection using the techniques in Discover Meteor and other 101 posts I find when Googling.
The 40,000 rows insert browser-side pretty quickly, but when I check Mongo server-side it only has 387 records.
Eventually (usually after 20 seconds or so) it starts to insert server-side.
But if I close or interrupt the browser, the already-inserted records obviously disappear.
How do I force inserts to go server-side, or at least monitor so I know when to notify the user of success?
I tried Tracker.flush(), with no difference.
I'd go with server-side inserts in a Meteor.method, but all the server-side CSV libraries are more complex to operate than the client-side ones (I'm a beginner to pretty much everything in programming :)
Thanks!
This is the main part of my code (inside client folder):
Template.hello.events({
    "submit form": function (event) {
        event.preventDefault();
        var reader = new FileReader();
        reader.onload = function (event) {
            var csv = Papa.parse(this.result, {header: true});
            var count = 0;
            _.forEach(csv.data, function (csvPerson) {
                count++;
                Person.insert(csvPerson);
                console.log('Inserting: ' + count + ' -> ' + csvPerson.FirstName);
            });
        };
        reader.readAsText(event.target[0].files[0]);
    }
});
The last few lines of console output:
Inserting: 39997 -> Joan
Inserting: 39998 -> Sydnee
Inserting: 39999 -> Yael
Inserting: 40000 -> Kirk
The last few lines of CSV (random generated data):
Jescie,Ayala,27/10/82,"P.O. Box 289, 5336 Tristique Road",Dandenong,7903,VI,mus.Proin#gravida.co.uk
Joan,Petersen,01/09/61,299-1763 Aliquam Rd.,Sydney,1637,NS,sollicitudin#Donectempor.ca
Sydnee,Oliver,30/07/13,Ap #648-5619 Aliquam Av.,Albury,1084,NS,Nam#rutrumlorem.ca
Yael,Barton,30/12/66,521 Auctor. Rd.,South Perth,2343,WA,non.cursus.non#etcommodo.co.uk
Kirk,Camacho,25/09/08,"Ap #454-7701 A, Road",Stirling,3121,WA,dictum.eu#morbitristiquesenectus.com
The hello template is a simple form obviously, just file select and submit.
Client code is under client directory.
Person defined in a file in application root.
CSV parsed as strings for now, to avoid complexity.
The records inserted look fine, retrieve by name, whatever.
Person.find().count() browser-side in console results in 40000.
Happy to send the file, which is only 1.5MB and it's random data - not sensitive.
I think call() should work as follows.
On the client side:
Meteor.call("insertMethod", csvPerson);
And the method on the server side (wrapped in Meteor.methods so the client can call it):
Meteor.methods({
    insertMethod: function (csvPerson) {
        Person.insert(csvPerson);
    }
});
In Meteor, in some scenarios, if you don't pass a callback the operation runs synchronously.
If you run Person.insert(csvPerson); on the server, the operation will be sync, not async. Depending on what you want to do, this might cause serious problems in the future. On the client, it won't be sync but async.
Since Node.js is an event-based server, a single sync operation can halt the entire system, so you have to be really careful about your sync operations.
For importing data, the best option is to do it server-side inside Meteor.startup(function(){ /* import code goes here */ }).
The solution proposed by Sindis works, but it is slow, and if the browser closes (for some reason) you're not keeping track of the already-inserted records. If you use Meteor.call("insertMethod", csvPerson);, this operation will be sync on the client.
The best option in your beginner scenario (not optimal) is to:
1- While you have records to insert:
2- Call Meteor.call without a callback
3- Count all the inserted records in the collection
4- Save this value to localStorage
5- Go back to step 1
This works assuming that the order of insertion is the same on every insert attempt. If your browser fails, you can always get the value from localStorage and skip that number of records (a rough sketch follows below).
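Here is a minimal sketch of that loop, assuming the insertMethod shown above and plain browser localStorage as the checkpoint (the count is tracked locally here rather than re-queried from the collection, to keep it short):

// Client-side sketch: resume a CSV import using localStorage as a crude checkpoint.
function importPeople(rows) {
    var start = Number(localStorage.getItem('csvImportCount') || 0);
    for (var i = start; i < rows.length; i++) {
        // Step 2: call the server method without a callback
        Meteor.call('insertMethod', rows[i]);
        // Steps 3-4: remember how many rows have been handed off so far
        localStorage.setItem('csvImportCount', String(i + 1));
    }
}
// Usage: importPeople(csv.data); after Papa.parse has run.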

WordPress Write Cache Issue with Multiple Sessions

I'm working on a content dripper custom plugin in WordPress that my client asked me to build. He wants it to catch a page view event and, if it's the right time of day (24 hours since the last post), to pull from a resource file and output another post. He also needs it to raise a flag and prevent other sessions from firing that same snippet of code. So, it raises some kind of flag saying, "I'm posting that post, go away, other process," then makes the post and releases the flag again.
However, the strangest thing occurs when it is placed under load with multiple sessions hitting the site with page views. Instead of firing one post, it randomly creates 1, 2, or 3 extra posts, each one thinking that it was the right time to post because 24 hours had passed since the last post. Because it's somewhat random, I'm guessing the problem is some kind of write caching where the other sessions don't see the raised flag until a couple of microseconds pass.
The plugin was raising the "flag" by simply writing to the wp_options table with the update_option() API in WordPress. The other user sessions were supposed to read that value with get_option() and see the flag, and then not run that piece of code that creates the post because a given session was already doing it. Then, when done, I lower the flag and the other sessions continue as normal.
But what it's doing is letting those other sessions in.
To make this work, I was using add_action('loop_start','checkToAddContent'). The odd thing about that function though is that it's called more than once on a page, and in fact some plugins may call it. I don't know if there's a better event to hook. Even still, even if I find an event to hook that only runs once on a page view, I still have multiple sessions to contend with (different users who may view the page at the same time) and I want only one given session to trigger the content post when the post is due on the schedule.
I'm wondering if there are any WordPress plugin devs out there who could suggest another event hook to latch on to, and to figure out another way to raise a flag that all sessions would see. I mean, I could use the shared memory API in PHP, but many hosting plans have that disabled. Can't use a cookie or session var because that's only one single session. About the only thing that might work across hosting plans would be to drop a file as a flag, instead. If the file is present, then one session has the flag. If the file is not present, then other sessions can attempt to get the flag. Sure, I could use the file route, but it's kind of immature in my opinion and I was wondering if there's something in WordPress I could do.
The key may be to create a semaphore record in the database for the "drip" event.
Warning - consider the following pseudocode - I'm not looking up the functions.
When the post is queried, use a SQL statement like
$ts = get_time_now(); // or whatever the function is
$sid = session_id();
INSERT INTO table (postcategory, timestamp, sessionid)
SELECT "$category", $ts, "$sid" FROM DUAL
WHERE NOT EXISTS (SELECT 1 FROM table WHERE postcategory = "$category"
                  AND timestamp > $ts - 24 hours)
Database integrity will make this atomic, so only one record can be inserted, and the insertion will only take place if the timespan has been exceeded.
Then immediately check to see if the current session_id() and timestamp are yours. If they are, drip.
SELECT sessionid FROM table
WHERE postcategory = "$postcategory"
AND timestamp = $ts
AND sessionid = "$sid"
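A rough WordPress-flavoured sketch of the same idea, assuming a hypothetical wp_drip_lock table (the table and column names are made up for illustration) and using $wpdb:

global $wpdb;

$ts  = time();
$sid = session_id();

// Atomically claim the "drip" slot: the row is only inserted if no other session
// has claimed this category within the last 24 hours.
$claimed = $wpdb->query($wpdb->prepare(
    "INSERT INTO {$wpdb->prefix}drip_lock (postcategory, ts, sessionid)
     SELECT %s, %d, %s FROM DUAL
     WHERE NOT EXISTS (
         SELECT 1 FROM {$wpdb->prefix}drip_lock
         WHERE postcategory = %s AND ts > %d
     )",
    $category, $ts, $sid, $category, $ts - DAY_IN_SECONDS
));

if ($claimed === 1) {
    // This session won the race: create the dripped post here.
}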
The problem occurs with page requests even from the same session (same visitor), but it can also occur with page requests from separate visitors. It works like this:
If you are doing content dripping, then a page request is probably what you intercept with add_action('wp','myPageRequest'). From there, if a scheduled post is due, then you create the new post.
The post takes a little bit of time to write to the database. In that time, a query on get_posts() may not see that new record yet. It may actually trigger your piece of code to create a new post when one has already been placed.
The fix appears to be to force WordPress to flush the write cache, like this:
try {
    // Grab any one recent post just to get a post ID we can attach meta to.
    $asPosts = wp_get_recent_posts(1);
    foreach ($asPosts as $asPost) { break; }
    // Re-write a throwaway hidden custom field to force a database write.
    delete_post_meta($asPost['ID'], '_thwart');
    add_post_meta($asPost['ID'], '_thwart', '' . date('Y-m-d H:i:s'));
} catch (Exception $e) {}

// Now query the most recent post again and compare its date with today's date.
$asPosts = wp_get_recent_posts(1);
foreach ($asPosts as $asPost) { break; }
$sLastPostDate = $asPost['post_date'];
$sLastPostDate = substr($sLastPostDate, 0, strpos($sLastPostDate, ' '));
$sNow = date('Y-m-d H:i:s');
$sNow = substr($sNow, 0, strpos($sNow, ' '));
if ($sLastPostDate != $sNow) {
    // No post today, so go ahead and post your new blog post.
    // Place that code here.
}
The first thing we do is get the most recent post, although we don't really care whether it's actually the most recent one or not. All we're getting is a single post ID, and then we add a hidden custom field (thus the underscore it begins with) called
_thwart
...as in, thwart the write cache by posting some data to the database that's not too CPU heavy.
Once that is in place, we use wp_get_recent_posts(1) again to see whether the most recent post is from today. If not, then we are clear to drip some content in. (Or, if you only want to drip every 72 hours, etc., you can adjust this here.)

Resources