IPCS message passing related queries - unix

I am dealing with the message passing IPC mechanism and have a few questions about it:
1. The KEY field in the output of ipcs -q shows 0x00000000. What does this mean?
2. Can I see what message was passed using the msqid?
3. If two entries are present (for a particular user) after executing ipcs -q, does this mean that two messages were passed by this particular user?
4. If the used-bytes and messages fields are set to 0, what does this mean?
5. Is there a way to see whether a message queue is full or not?
6. How many queues can we have for one particular user?
I tried googling, but was not able to find answers to these questions.

1. The "key" field of the Shared memory segments is usually 0x00000000. This indicates the IPC_PRIVATE key specified during creation of the shared memory segment. The manual of shmget() contains more details.
2. AFAIK, this cannot be done. If any msg is "de-queued" from the msgQ, then the intended receiver will not see it.
3. The 2 entries in the list of message queues indicate that there are currently 2 active message queues on the system, identified by their corresponding unique keys; it does not mean that 2 messages were passed.
Creating an additional msgQ: ipcmk -Q
Deleting an existing msgQ: ipcrm -Q <unique-key>
4. The used-bytes and messages fields set to 0 indicate that no messages are currently sitting in that particular msgQ (i.e. anything that was sent has already been received).
5. One way to do this is to obtain the number of msgs currently queued up in the msgQ programmatically, as shown in the following C snippet. This can then be compared with the capacity of the msgQ, as sketched further below.
struct msqid_ds buf;                               /* defined in <sys/msg.h> */
int ret = msgctl(msqid, IPC_STAT, &buf);           /* fill buf with the status of the msgQ */
unsigned int msgs = (unsigned int)buf.msg_qnum;    /* number of msgs currently in the queue */
printf("msgs in Q = %u\n", msgs);
6. There is a limit on the total memory used by all the msgQs on the system combined. On Linux, the per-queue and system-wide limits (maximum bytes per queue, maximum message size, and maximum number of queues) can be listed with ipcs -l. The number of bytes currently used in a msgQ is listed under the used-bytes column in the output of ipcs -q. The total number of msgQs is therefore bounded by the system-wide maximum number of queues and the memory available for new msgQs, not by a per-user quota. A programmatic way to read these limits is sketched below.
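As a Linux-specific sketch of how these limits can also be read programmatically (the same values that ipcs -l reports), msgctl() accepts the IPC_INFO command:
#define _GNU_SOURCE                      /* required for struct msginfo */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/msg.h>
int main(void) {
    struct msginfo info;
    if (msgctl(0, IPC_INFO, (struct msqid_ds *)&info) == -1) {
        perror("msgctl(IPC_INFO)");
        return 1;
    }
    printf("max queues system-wide (msgmni): %d\n", info.msgmni);
    printf("max bytes per queue    (msgmnb): %d\n", info.msgmnb);
    printf("max size of a message  (msgmax): %d\n", info.msgmax);
    return 0;
}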
Also check out the latter part of this answer for a few sample operations on POSIX message queues.

Related

Getting data from a direct mapped cache in prolog

The predicate getDataFromCache(StringAddress,Cache,Data,HopsNum,directMap,BitsNum)
should succeed when the Data is successfully retrieved from the Cache (cache hit)
and the HopsNum represents the number of hops required to access the data from
the cache, which can differ according to the direct-mapped cache mapping technique, such that:
• StringAddress is a string of the binary number which represents the address of the data you are required to access, and it is six binary bits.
• Cache is the cache, using the representation discussed previously.
• Data is the data retrieved from the cache when a cache hit occurs.
• HopsNum is the number of hops required to access the data from the cache.
• BitsNum is the number of bits the index needs.
getDataFromCache always gives me false although everything else seems to be working, so I would like someone to fix it.
convertAddress(Binary,N,Tag,Idx,directMap):-
    Idx is mod(Binary,10**N),
    Tag is Binary // 10**N.

getDataFromCache(SA,[item(tag(T),data(D),V,_)|T],Data,HopsNum,directMap,BitsNum):-
    convertAddress(SA,BitsNum,Tag,Idx,directMap),
    number_string(Tag,Z),
    Z==T,
    V==1,
    Data is D.

getDataFromCache(SA,[item(tag(T),data(D),V,_)|T],Data,HopsNum,directMap,BitsNum):-
    convertAddress(SA,BitsNum,Tag,Idx,directMap),
    number_string(Tag,Z),
    (Z\=T;V==0),
    getDataFromCache(SA,T,Data,HopsNum,directMap,BitsNum).
Simply put, HopsNum is always zero here, and you don't have to traverse the list since the cache is direct-mapped: you can access the entry at the computed index directly using the nth0 predicate.
Also, you are using the variable T twice in the clause head (as the tag inside item/4 and as the tail of the list), so the two get unified with each other and the clauses fail.

Understanding elasticsearch circuit_breaking_exception

I am trying to figure out why I am getting this error when indexing a document from a python web app.
The document in this case is a base64 encoded string of a file of size 10877 KB.
I post it to my web app, which then posts it via elasticsearch.py to my elastic instance.
My elastic instance throws an error:
TransportError(429, 'circuit_breaking_exception', '[parent] Data
too large, data for [<http_request>] would be
[1031753160/983.9mb], which is larger than the limit of
[986932838/941.2mb], real usage: [1002052432/955.6mb], new bytes
reserved: [29700728/28.3mb], usages [request=0/0b,
fielddata=0/0b, in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb]')
I am trying to understand why my 10877 KB file ends up at a size of 983mb as reported by elastic.
I understand that increasing the JVM max heap size may allow me to send bigger files, but I am more wondering why it appears the request size is 10x the size of what I am expecting.
Let us see what we have here, step by step:
[parent] Data too large, data for [<http_request>]
gives the name of the circuit breaker
would be [1031753160/983.9mb],
says how large the heap would be if the request were executed
which is larger than the limit of [986932838/941.2mb],
tells us the current setting of the circuit breaker above
real usage: [1002052432/955.6mb],
this is the real usage of the heap
new bytes reserved: [29700728/28.3mb],
this is actually an estimate of the impact the request will have (the size of the data structures which need to be created in order to process the request). Your ~10 MB file will probably consume about 28.3 MB here.
usages [
request=0/0b,
fielddata=0/0b,
in_flight_requests=29700728/28.3mb,
accounting=202042/197.3kb
]
This last line tells us how the estimate is being calculated. Note that the request itself is only estimated at ~28.3 MB: the 983.9 MB figure is the current real heap usage (955.6 MB) plus the newly reserved bytes, and it is this sum, not the size of your document alone, that exceeds the 941.2 MB parent breaker limit.

Count unread SMS using AT command

How do I count unread SMS or received SMS using AT commands?
void UnreadMEssage() {
  fonaSS.println("AT+CMGF=0");                  // 0 = PDU mode (1 = text mode)
  delay(1000);
  fonaSS.println("AT+CMGL=\"REC UNREAD\",1");   // list messages with status "REC UNREAD"
}
Using this code I can show all the received text messages, but I want to count the unread SMS.
Answering in reference to this blog:
There is no direct command to count the number of unread messages. We can, however, use the AT+CMGL command in a modified way to count unread messages.
Use the command AT+CPMS? to find out how many messages are stored on your SIM in total.
Use AT+CMGL=<stat> for each status other than 0 "REC UNREAD" and count the number of messages for each of these.
Add each of these counts together and subtract that from the total memory used as reported by +CPMS, and you've got the number of unread messages.
N.B.: If you don't mind "reading" the messages, just do the +CMGL for status 0 "REC UNREAD" and count the results; those messages will then be marked as read. A sketch of this approach is below.
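As a rough, untested sketch of that "just list and count" approach, assuming fonaSS is the SoftwareSerial connection from the question and that the modem is switched to text mode (AT+CMGF=1), the unread messages can be counted by counting the "+CMGL:" header lines in the listing response (which also marks them as read):
int countUnreadSMS() {
  fonaSS.println("AT+CMGF=1");                // text mode, so the status string is "REC UNREAD"
  delay(500);
  while (fonaSS.available()) fonaSS.read();   // drain the "OK" response
  fonaSS.println("AT+CMGL=\"REC UNREAD\"");   // list unread messages (marks them as read)
  delay(2000);                                // crude wait; real code should poll with a timeout
  int count = 0;
  String line = "";
  while (fonaSS.available()) {
    char c = fonaSS.read();
    if (c == '\n') {
      if (line.startsWith("+CMGL:")) count++; // one "+CMGL:" header per listed message
      line = "";
    } else if (c != '\r') {
      line += c;
    }
  }
  return count;
}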
The AT command +CPMS (Preferred Message Storage) is used to:
Select the message storage area that will be used when sending, receiving, reading, writing or deleting SMS messages.
Find the number of messages that are currently stored in the message storage area.
Find the maximum number of messages that can be stored in the message storage area.
Try AT+CPMS? to list the available spaces, and issue a command like this:
AT+CPMS="SM","SM","SM"
The response should show the used space and available space for each storage area, like this:
+CPMS: "SM",19,20,"SM",19,20,"SM",19,20
Here SM signifies the SIM card storage; a list of the available options is below. A typical response has the form:
+CPMS: used1,max1,used2,max2,used3,max3
Based on the count, read each message using AT+CMGR=x (x is the index of the message), parse the response for "REC UNREAD", and count those to get the number of unread messages. A small parsing helper for the +CPMS response is sketched below.
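For completeness, here is a small hypothetical helper (again Arduino-style, assuming the full AT+CPMS? reply has been collected into a String) that pulls the used/total counts for the first storage area out of a response such as +CPMS: "SM",19,20,"SM",19,20,"SM",19,20:
bool parseCPMS(const String &resp, int &used, int &total) {
  int at = resp.indexOf("+CPMS:");
  if (at < 0) return false;                        // no +CPMS line in the reply
  int firstComma = resp.indexOf(',', at);          // skip the storage name ("SM")
  if (firstComma < 0) return false;
  used = resp.substring(firstComma + 1).toInt();   // e.g. 19 messages stored
  int secondComma = resp.indexOf(',', firstComma + 1);
  if (secondComma < 0) return false;
  total = resp.substring(secondComma + 1).toInt(); // e.g. 20 slots in total
  return true;
}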
PS:
Here, SM is the storage area on the SIM card. The others are:
SM - message storage area on the SIM card.
ME - message storage area on the GSM/GPRS modem or device; usually a larger storage space than the SIM card (SM).
MT - all message storage areas associated with the modem or device.
BM - broadcast message storage area.
(In some devices, ME & MT refer to flash message storage.)
SR - status report message storage area; it is used to store status reports.
TA - Terminal Adaptor message storage area.

Yet another catch 22 in BizTalk generating 999

I have asked another question with almost the same scenario: A catch 22 in generating 999 file.
Basically, I am inbounding HIPAA 837 files and am required to generate a 999 response file.
Today I inbounded a file with the ST02 element missing.
The TA1 was created with an Accept status, because it only cares about the ISA-IEA level, and that part is good.
BizTalk inbounded the file, found the issue, and actually generated a 999 message, but it failed to send it out as a physical file because:
Unable to read the stream produced by the pipeline.
Details: Error: 1 (Field level error)
SegmentID: AK2
Position in TS: 3
Data Element ID: AK202
Position in Segment: 2
Data Value:
1: Mandatory data element missing
So here's the catch-22: a 999 should be created to report the error for this incoming 837 file, but the 999's AK202 is a required field that references the incoming file's transaction set control number defined in ST02.
And the error in the incoming file is precisely that this ST02 is missing.
So for this scenario, it ends up with an accepted TA1 and a pending message in the BizTalk MessageBox.
From our trading partner's view, they send a file and ONLY get a TA1 response with an Accept status.
My questions are:
1. Which is the right file to report this kind of error (missing ST02), TA1 or 999?
2. Is there any way to bypass this error and have the 999 created?
There's an RFI on this at x12.org: http://rfi.x12.org/Request/Details/55?stateViewModel=WPC.RFI.Models.ViewModels.RequestViewModel
The TLDR version: you should reject the entire functional group, and use the control identifier from the functional group in AK202.
Here's the relevant text:
Description
What Segments/Data Elements should be used in the 997 when reporting an error in ST02 (Transaction Set Control Number) when the error is related to syntax or min/max? If you attempt to create a 997 back to the submitter with the inbound data from ST02 in AK202 of the 997 you would be creating an invalid 997 transaction. It appears there may be a gap in the 997 standard for reporting errors at this level. If we have misinterpreted the use of the transaction and it can be reported, please let us know how.
Response
Data elements AK102 and AK202 located within transaction set 997 and transaction set 999 are to be used to convey the values of control numbers in the functional group or transaction sets being acknowledged. If including a copy of the value of a data element in the 997 or 999 would cause a syntax violation in the 997 or 999, then if the violation is to be reported at the level at which it was found it must be reported at the next higher level.
Recommendation
When reporting errors after the syntactic analysis of the transaction set, the data analyzed must be able to be reported within the acknowledgment. While data element AK404 supports reporting the value of a data element that fails syntactic analysis without violating the syntax of the 997, the same does not apply to AK202. There are two generally accepted methods of acknowledging transaction sets: 1) acknowledge all transaction sets within the functional group or 2) acknowledge only those transaction sets containing errors. It is not recommended to accept a functional group with errors if the transaction set control number in error cannot be reported in AK202. For the example in your request, the appropriate action is to reject the entire functional group containing the ST02 value which when echoed in AK202 would create a syntactically invalid 997. In addition, the same logic applies to the functional group control number; the appropriate action is to reject the entire interchange containing the syntactically invalid data.

Dynamics AX 2009 AIF Tables

Background
I have an issue where roughly once a month the AIFQueueManager table is populated with ~150 records relating to messages which had been sent to AX (where they "successfully failed", i.e. errored due to violation of business rules but returned an exception as expected) over 6 months ago.
Question
What tables are involved in the AIF inbound message process, and what order do events occur in? e.g. the XML file is picked up and recorded in the AifDocumentLog, the data is extracted and added to the AifQueueManager and AifGatewayQueue tables, records from there are then inserted into the AifMessageLog, etc.
Thanks in advance.
There are 4 main AIF classes; I will be talking about the inbound side only, focusing on the included file system adapter and flat XML files. I hope this makes things a little less hazy.
AIFGatewayReceiveService - Uses adapters/channels to read messages in from different sources, and dumps them in the AifGatewayQueue table
AIFInboundProcessingService - This processes the AifGatewayQueue table data and sends to the Ax[Document] classes
AIFOutboundProcessingService - This is the inverse of #2. It creates XMLs with the relevant metadata
AIFGatewaySendService - This is the inverse of #1, where it uses adapters/channels to send messages out to different locations from the AifGatewayQueue
For #1
So #1 basically fills the AifGatewayQueue, which is just a queue of work. It loops through all of your channels and then finds the relevant adapter by ClassId. The adapters are classes that implement AifIntegrationAdapter and AifReceiveAdapter if you wanted to make your own custom one. When it loops over the different channels, it then loops over each "message" and tries to receive it into the queue.
If it can't process the file for some reason, it catches exceptions and throws them in the SysExceptionTable [Basic>Periodic>Application Integration Framework>Exceptions]. These messages are scraped from the infolog, and the messages are generated mostly from the receive adaptor, which would be AifFileSystemReceiveAdapter for my example.
For #2
So #2 is processing the inbound messages sitting in the queue (ready/inprocess). The AifRequestProcessor\processServiceRequest does the work.
From this method, it will call:
Various calls to Classes\AifMessageManager, which puts records in the AifMessageLog and the AifDocumentLog.
This key line: responseMessage = AifRequestProcessor::executeServiceOperation(message, endpointActionPolicy); which actually does the operation against the Ax[Document] classes by eventually getting to AifDispatcher::callServiceMethod(...)
It gets the return XML and packages that into an AifMessage called responseMessage and returns that where it may be logged. It also takes that return value, and if there is a response channel, it submits that back into the AifGatewayQueue
AifQueueManager is actually cleared and populated on the fly by calling AifQueueManager::createQueueManagerData();.
