I am writing a web application which needs to receive e-mail messages at users' internal email addresses, let administrators approve them, and then forward them to the corresponding user's external mailbox.
I have installed and configured Postfix for the message-receiving task. It uses virtual e-mail addresses and my existing database where user email addresses are stored. Local email storage is maildir and I use Postfix's virtual MDA.
Basically, I would like to execute a script every time a new message is received, and know which user it is for (the maildir message ID would be very helpful too). Then I could read the message from Python code (Python has a module for maildir mailboxes) and insert it into the database.
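(For reference, a minimal sketch of reading one user's maildir with Python's mailbox module; the path is a placeholder and the database insert is only indicated by a comment.)

import mailbox

# Placeholder path: one user's maildir as delivered by the virtual MDA
md = mailbox.Maildir("/var/mail/vhosts/example.com/user1", factory=None)

for key in md.keys():              # key is the maildir message ID
    msg = md[key]                  # a mailbox.MaildirMessage
    if "S" in msg.get_flags():     # already seen/processed
        continue
    print(key, msg.get("From"), msg.get("Subject"))
    # ...insert the message into the database here...
    msg.add_flag("S")              # mark it as seen
    md[key] = msg                  # write the flag change back to the maildir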
I can think of three ways to do this:
iterate over user maildirs and check whether there are any new messages, but this would be inefficient for a large number of users.
use dbmail and then check whether there are any new messages in the database (this would be quicker, but I'd have to configure everything from scratch). Besides, the existing user data tables could not be used.
write a wrapper around maildrop/virtual that saves the message in the database as well as in the maildir, but I'd need a way to check that the received message is valid and was successfully saved by the real MDA.
Any suggestions appreciated!
In the /etc/aliases file you can specify a program which gets executed every time a user receives a mail. This program gets the mail on stdin, so you don't have to poll and your program runs immediately.
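A minimal sketch of such a handler, assuming a hypothetical alias entry and script path; the database step is only indicated by a comment:

#!/usr/bin/env python
# Hypothetical handler referenced from /etc/aliases, e.g.:
#   someuser: "|/usr/local/bin/handle_mail.py"
# (run "newaliases" after editing). The MTA pipes the raw message to stdin.
import sys
import email

msg = email.message_from_file(sys.stdin)   # parse the RFC 2822 message
sender = msg.get("From", "")
recipient = msg.get("To", "")
subject = msg.get("Subject", "")

# Insert the message into the approval database here (omitted).

sys.exit(0)  # 0 tells the MTA delivery succeeded; 75 (EX_TEMPFAIL) asks it to retry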
To answer my own question: I used Postfix's content_filter with the X flag set on the pipe(8) transport, and I process the receiving address and message manually. Since I didn't need to access the messages in the maildir, this approach works fine for me.
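For reference, a rough sketch of the Postfix side of this setup; the service name, user and script path are placeholders, and the exact pipe flags will depend on your installation:

# /etc/postfix/main.cf -- route incoming mail through the filter service
content_filter = approval-filter:dummy

# /etc/postfix/master.cf -- pipe(8) service; the X flag marks it as final delivery
approval-filter unix - n n - 10 pipe
  flags=Xq user=vmail argv=/usr/local/bin/approval_filter.py ${sender} ${recipient}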
I am not using realtime Asterisk, but astdb.sqlite3 still contains entries for online peers, with the Reg. Contact information under the SIP/Registry/<peer> key. I would like to store the contact information of all peers in a separate persistent database as they come online. I need this for sending push notifications by fetching the deviceID and other information from the registration contact.
I tried to pull this information from astdb.sqlite3, but the entries are cleared as soon as devices go offline. Though I am able to fetch the information with "sip show peer XXXX" in the Asterisk CLI, it is too heavy to fetch it that way every time. Instead I want to save only the Reg. Contact information for all devices in a database (without realtime) as the devices come online. The other way I tried to pull the information is with an AMI event listener, but with AMI I don't see complete information such as the contact information; it displays only the following:
Event: PeerStatus
Privilege: system,all
SequenceNumber: 75
File: manager.c
Line: 1856
Func: manager_default_msg_cb
ChannelType: SIP
Peer: SIP/2030
PeerStatus: Reachable
Can someone suggest a better way to push only the Reg. Contact information to a database as the devices come online?
There is no mechanism like that in Asterisk.
You can use Kamailio or write a patch similar to this one: https://reviewboard.asterisk.org/r/4490/
It sounds like you have dynamic IPs for your endpoints, and you want a way to update a separate DB as soon as a device registers with an IP/port pair.
If you enable the security log, you will see all auth events, including the "SuccessfulAuth" event, which includes the RemoteAddress of the endpoint (including port and protocol).
Here is an example line:
[Jul 21 19:53:45] SECURITY[1342] res_security_log.c: SecurityEvent="SuccessfulAuth",EventTV="2020-07-21T19:53:45.182+0000",Severity="Informational",Service="SIP",EventVersion="1",AccountID="102",SessionID="0x7f41040132c0",LocalAddress="IPV4/UDP/10.0.0.200/5060",RemoteAddress="IPV4/UDP/10.0.0.75/5062",UsingPassword="1"
If all you're looking for is AccountID="102" and RemoteAddress="IPV4/UDP/10.0.0.75/5062", a very fast/cheap way to get it is to enable the security log, and use a script to tail it and update your DB as soon as the event occurs. I like to keep the security log on anyways for utilities like fail2ban. Just make sure your script is able to reopen the file each time it is rotated.
Edit:
By default the log is in /var/log/asterisk. To enable it, edit /etc/asterisk/logger.conf and un-comment (or create) the line under [logfiles] that says security => security.
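A minimal tail-and-update sketch in Python, assuming the default log location, a SQLite database and table of my own choosing, and the field order shown in the example line above:

#!/usr/bin/env python
# Follows the Asterisk security log, extracts AccountID and RemoteAddress
# from "SuccessfulAuth" events and upserts them into a local SQLite table.
import os
import re
import sqlite3
import time

LOG_PATH = "/var/log/asterisk/security"          # assumed default location
EVENT_RE = re.compile(r'SecurityEvent="SuccessfulAuth".*?'
                      r'AccountID="(?P<account>[^"]+)".*?'
                      r'RemoteAddress="(?P<remote>[^"]+)"')

db = sqlite3.connect("reg_contacts.db")          # placeholder database name
db.execute("CREATE TABLE IF NOT EXISTS contacts ("
           "account TEXT PRIMARY KEY, remote TEXT, seen REAL)")

def follow(path):
    """Yield new lines, reopening the file when it is rotated."""
    f = open(path)
    f.seek(0, os.SEEK_END)
    inode = os.fstat(f.fileno()).st_ino
    while True:
        line = f.readline()
        if line:
            yield line
            continue
        time.sleep(1)
        try:
            if os.stat(path).st_ino != inode:    # log was rotated, reopen it
                f.close()
                f = open(path)
                inode = os.fstat(f.fileno()).st_ino
        except OSError:
            pass                                 # file briefly missing during rotation

for line in follow(LOG_PATH):
    m = EVENT_RE.search(line)
    if m:
        db.execute("INSERT OR REPLACE INTO contacts VALUES (?, ?, ?)",
                   (m.group("account"), m.group("remote"), time.time()))
        db.commit()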
To start off, I'd like to state that this is my first dive into Asterisk-related applications, and that I'm mostly a web developer.
My workplace uses an MSP that installed Asterisk/FreePBX to manage our phone systems. The GUI is pretty intuitive and after reading and getting a bit lost I figured I'd come here and see how to go about setting this up.
I was tasked with building a simple application to reset user passwords through both a web interface (completed) and a phone interface - by dialing a number, entering their ID card #, and then having their password reset. I'm a Systems Administrator and have access to all necessary applications, servers, etc. I can pick things up fairly easily and I was told I'd have enough time to figure this out and get it done.
This is what I need in terms of pseudocode when the user calls a specific extension:
recording('pwResetCardID');  // Play a "Please enter your ID # to reset PW" greeting.

function getCardID() {
    cardID = input();  // Input 4-5 digits using the dialpad and save it to a var.
    verify = get('http://some.site/endpoint/cardid/' + cardID);  // Send a GET request.
    if (verify) {  // If we got a successful response (200)
        recording('pwChanged');  // Tell the user their password has changed
    } else {
        recording('errorCardID');  // Otherwise tell them to try again
        getCardID();  // Recur the function.
    }
}

getCardID();
If the cardID is valid, their PW is changed on the other end by my node.js application, and I simply need the GET request to be sent out and the user notified of the success (or failure).
You can start from the documentation describing the Asterisk dialplan.
You will probably need func_CURL, the Read application, Playback and Goto.
You need to put the new dialplan in extensions_custom.conf and set it up for use via the custom apps module.
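A very rough dialplan sketch along those lines; the context name, extension number and prompt names are placeholders, it assumes func_curl is loaded, and it assumes the endpoint returns the body "OK" on success:

[pw-reset-custom]
exten => 7900,1,Answer()
 same => n(prompt),Read(CARDID,custom/pwResetCardID,5)   ; play the greeting and collect up to 5 digits
 same => n,Set(RESULT=${CURL(http://some.site/endpoint/cardid/${CARDID})})
 same => n,GotoIf($["${RESULT}" = "OK"]?ok:retry)        ; branch on the HTTP response body
 same => n(ok),Playback(custom/pwChanged)
 same => n,Hangup()
 same => n(retry),Playback(custom/errorCardID)
 same => n,Goto(prompt)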
I want to create a send port that writes all messages going in and out of BizTalk to file.
My organization is using Splunk. Splunk will import data from the file directory to make sense of the various messages.
Is it possible to create a filter on a send port that subscribes to "everything"? I could solve this by applying a filter for each message type in my system. However, there are a lot of messages going back and forth, and I'm wondering if there is a simpler solution.
I'm using BizTalk 2013.
Yes, just filter on message type like you said, but rather than selecting = and specifying the message type, just select Exists. That will then match any message that has a message type.
EDIT:
As Johns-305 has pointed out, if you have any messages that don't have a message type (e.g. from pass-through receive locations), you may want to pick BTS.MessageID instead, as that will always exist for a message in the message box.
I'm working on creating small relay stations out of tiny embedded Linux boxes. They have some sensors connected to them and transport data back to a server via HTTP POST. Right now the server just accepts their message, along with a unique ID (the MAC address of eth0).
I want to expand this to include some type of security. I want to be able to deploy these little devices with minimal configuration. I'd like to copy a base firmware to the device, hook them up in the field, and they self-register. The first time they connect, I'd like the server and device to do some type of negotiation where I can store a fingerprint. Subsequent requests I could then authentication/verify the device using that fingerprint.
That way, once a device registers with its unique ID, I can be assured all data from that ID is from the same device. If a rogue device or set of devices does register, I'll just delete them (I store IPs too, so I can delete by unknown ranges and block them).
My question is what's the best way to go about doing this? I think back to the idea of SSH fingerprints, where the first time you connect to a server you get a server fingerprint. If a future request yields a different fingerprint, you get a huge warning and have to manually delete the fingerprint out of your authorized_keys file if the server's keys have actually regenerated (e.g. you did a reinstall without saving your old SSH keys).
Is something like this possible with HTTP, ideally avoiding having to use pre-shared keys?
If it matters, the clients are running Python2 and the server they connect to is written mostly in Scala on Tomcat.
Basically, all you need to do is tell the server the public key, and then sign all of your messages with the corresponding private key. If you don't want pre-shared keys, then the server cannot be assured that someone new who is registering is actually one of your devices. You can still validate that the message came from the same device that originally registered with that identifier, however.
The process basically goes like this:
Client generates a new key pair (e.g. an RSA public/private key pair).
Client registers with server, sending its public key. The server stores this public key.
When the client sends a message, it generates a signature of its message, which it attaches to the message. When the server receives the message, it validates the signature to ensure that the message was sent by someone holding the corresponding private key.
The code for this in PyCrypto goes something like this:
Generate key pair
from Crypto.PublicKey import RSA
key = RSA.generate(2048)
private_key = key.exportKey()
public_key = key.publickey().exportKey()
# private_key is a string suitable for storing on disk for retrieval later
# public_key is a string suitable for sending to the server
# The server should store this along with the client ID for verification
Generate signature
from Crypto.PublicKey import RSA
from Crypto.Hash import SHA
key = RSA.importKey(private_key)
# where private_key is read from wherever you stored it previously
digest = SHA.new(message).digest()  # message is the raw bytes of the payload being signed
signature = key.sign(digest, None)
# attach signature to the message however you wish
The server should load the public key it previously stored, use a "verify" method provided by whichever Scala/Java crypto API you use, and accept the message only if the verification succeeds.
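For illustration only (your server side is Scala/Java, not Python), the equivalent check in PyCrypto looks roughly like this, reusing public_key, message and signature from the snippets above:

from Crypto.PublicKey import RSA
from Crypto.Hash import SHA

# public_key, message and signature are the values produced by the two
# snippets above (key generation and signing).
pub = RSA.importKey(public_key)
digest = SHA.new(message).digest()

if pub.verify(digest, signature):
    print("Signature OK: message came from the registered device")
else:
    print("Signature mismatch: reject the message")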
It is important to understand the caveats of each approach, as various techniques only protect against certain types of attacks. For instance, the above approach does not protect against a "replay attack", in which an attacker records a message with a certain meaning and then re-transmits it to the server at a later time. One way of protecting against this would be to include a timestamp in the message which is hashed; another would be to use an appropriately encrypted transport (e.g. SSL/TLS).
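As a rough illustration of the timestamp idea only (the payload fields and the acceptance window are arbitrary choices, not part of the protocol above):

import json
import time
from Crypto.Hash import SHA
from Crypto.PublicKey import RSA

# Client side: embed a timestamp in the signed payload.
payload = json.dumps({"device": "aa:bb:cc:dd:ee:ff",   # placeholder device ID
                      "reading": 21.5,                  # placeholder sensor value
                      "ts": int(time.time())})
key = RSA.importKey(private_key)                        # private_key from the earlier snippet
signature = key.sign(SHA.new(payload.encode("utf-8")).digest(), None)

# Server side (conceptually): verify the signature first, then reject the
# message if abs(now - ts) exceeds some window, e.g. 300 seconds.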
We have a vendor that sends CSV files as email attachments. These CSV files contain statuses that are imported into our application. I'm trying to automate the process end-to-end, but it currently depends on someone opening an email, saving the attachment to a server share, so the application can use the file.
Since I cannot convince the vendor to change their process, such as offering an FTP location or a Web Service, I'm stuck with trying to automate the existing process.
Does anyone know of a way to programmatically open an email from a POP3 account and extract an attachment? The preferred solution would reside on a Windows 2003 server, be written in VB.NET, and be secure. The application can reside on the same server as the POP3 server; for example, we could set up the free POP3 server that comes with Windows Server and pull against the mail file stored on the file system.
BTW, we are willing to pay for an off-the-shelf solution, if one exists.
Note: I did look at this question but the answer points to a CodeProject solution that doesn't deal with attachments.
Try the Mail.dll email component; it's very affordable, supports attachments and national characters, is easy to use, and also supports SSL:
Using pop3 As New Pop3()
    pop3.Connect("mail.server.com")
    pop3.Login("user", "password")

    Dim builder As New MailBuilder()
    For Each uid As String In pop3.GetAll()
        ' Receive the email message
        Dim mail As IMail = builder.CreateFromEml(pop3.GetMessageByUID(uid))

        ' Write out the received message's subject
        Console.WriteLine(mail.Subject)

        ' Here you can use the mail.Attachments collection
        For Each attachment As MimeData In mail.Attachments
            Console.WriteLine(attachment.FileName)
            attachment.Save("c:\" + attachment.FileName)
            ' You can also use attachment.Data here
        Next attachment
    Next
    pop3.Close(True)
End Using
You can download it here: http://www.lesnikowski.com/mail.
Possible duplicate of Reading Email using Pop3 in C#.
At least there's a shedload of suggestions there that you may find useful.
I'll throw in a late suggestion for a more generalized "download POP3 messages and extract attachments" solution using existing software and minimal programming. I needed to do this for a client who switched to receiving faxes via email and was not pleased with manually saving the attachments to a location where they could be imported into an application.
For downloading messages on *nix systems fetchmail seems to be the standard and is very capable, but I chose mpop for both simplicity and Windows compatibility (but it is cross-platform). If mpop hadn't done the trick for me, I probably would have ended up doing something with the Python-based getmail, which was created when fetchmail's development stalled for a time (it's since resumed).
Mpop is controlled either via command line or configuration file, so I simply created multiple configuration files and specify via command line which file to load. I'm using it in "Exchange pickup directory" mode, which means it simply downloads the messages and drops them as text (.eml) files in a specified directory.
For extraction of the message attachments, UUDeview appears to be the standard (I'm using the Windows port of UUDeview) across just about any system you could want with just about any features you could want. My main alternative to this was a much-less-capable Python script that I'd developed for a different client back in 2007, but I'm happy to go with a precompiled executable over either installing Python or packaging with any of the Python-to-exe options.
Finally there's the configuration - along with the two mpop configuration files mentioned above (which I could do away with by using command-line options), I also have two 2-line .cmd files launched every 10 minutes by scheduled task - the first line to launch mpop to download into a working directory and the second line to launch UUDeview and extract attachments of specified types (.pdf or .tif) then delete each file from which it extracted attachments. Output is sent to another directory from which staff can directly attach files as needed.
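For anyone who would rather script the extraction stage than use UUDeview, a rough Python stand-in for what it does here could look like this (the paths and extensions are placeholders; the delete-after-extract behaviour mirrors the description above):

#!/usr/bin/env python
import email
import os

PICKUP_DIR = r"C:\fax\incoming"     # where mpop drops the .eml files (placeholder)
OUTPUT_DIR = r"C:\fax\attachments"  # where staff pick up the extracted files (placeholder)
WANTED = (".pdf", ".tif")

for name in os.listdir(PICKUP_DIR):
    if not name.lower().endswith(".eml"):
        continue
    path = os.path.join(PICKUP_DIR, name)
    with open(path, "rb") as fh:
        msg = email.message_from_binary_file(fh)
    extracted = False
    for part in msg.walk():
        filename = part.get_filename()
        if filename and filename.lower().endswith(WANTED):
            out = os.path.join(OUTPUT_DIR, os.path.basename(filename))
            with open(out, "wb") as outfh:
                outfh.write(part.get_payload(decode=True))
            extracted = True
    if extracted:
        os.remove(path)             # drop messages we successfully extracted from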
This is overall not the most elegant way to reach these ends, but it was quick, simple, functional and reasonably robust - at each stage if something goes wrong it fails such that no data is lost. The only places where data could be lost are any non-attachment messages being sent to the dedicated fax email addresses, and even those will sit in the processing directory and be caught eventually.