Qmail: auto response is polluted with email headers

When I set up an auto response in qmailadmin for an account, the auto response email is sent, but just after the auto response message there is a sort of stack trace with the headers of the original message, like:
Received: (qmail 903 invoked by uid 508); 12 Jul 2010 20:23:55 -0000
Received: blabla
Received: (qmail 914 invoked from network); 12 Jul 2010 20:31:44 -0000
Received: from blabla (ip)
some more trace headers, like DKIM-Signature, etc...
*original message headers*
*original message content*
Is there a way to not have all this junk in the body of the email? My users are not very happy with it... (It used to work well in the past, and I didn't change anything recently.)
If you have an idea of what is happening, your answers are welcome!
Thanks!

I assume that you are using qmail-autoresponder. To check what is going wrong, I need two additional pieces of information at the moment:
1) The content of the .qmail file with the responder. There should be something like this inside:
|qmail-autoresponder /path/to/autorespond/dir
2) The content of the autorespond directory, especially the message.txt. You can check whether the message.txt is in a correct MIME format. It should look something like this:
From: "Sender of Mail" <email@yourhost.de>
User-Agent: autoresponder-system
Return-Path: <>
Subject: Re: %S
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0

Dear Sir or Madam,
This email address will be shut down in the coming weeks. We kindly ask
you to contact your new contact person in the future ...
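As a quick sanity check, the template can be run through a MIME parser; if the blank line between the headers and the body is missing, the body lines get swallowed as malformed headers instead of reaching the payload. A minimal sketch in Python (the address, subject, and body are placeholder values):

```python
from email import message_from_string

# A well-formed message.txt: header lines, one blank line, then the body.
# All values here are illustrative placeholders, not real addresses.
template = (
    'From: "Sender of Mail" <email@yourhost.de>\n'
    "Return-Path: <>\n"
    "Subject: Re: %S\n"
    'Content-Type: text/plain; charset="iso-8859-1"\n'
    "Content-Transfer-Encoding: quoted-printable\n"
    "MIME-Version: 1.0\n"
    "\n"
    "Dear Sir or Madam,\n"
    "this mailbox will be shut down in the coming weeks.\n"
)

msg = message_from_string(template)
print(msg["Subject"])             # the %S placeholder survives as a literal
print(msg.get_content_type())
print(msg.get_payload().splitlines()[0])
```

If the blank line is removed from the template, `get_payload()` comes back empty, which is a quick way to spot a broken message.txt.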

Finally I made it work by updating autorespond to the latest version (2.0.2); I don't know which version I had before ...

autorespond: usage: time num message dir [ flag arsender ]
optional parameters:
flag - handling of original message:
0 - append nothing
1 - append quoted original message without attachments
Add a 0 at the end of the line in the .qmail file:
/home/vpopmail/domains/mydomain.com/myuser/Maildir/ | /usr/local/bin/autorespond 86400 3 /home/vpopmail/domains/mydomain.com/myuser/vacation/message /home/vpopmail/domains/mydomain.com/myuser//vacation 0

Related

Karate fails to close earlier started Chrome processes

In my project I use Karate 0.9.5 and Oracle JDK 8.
From time to time in our pipeline I see a problem: Karate fails to close previously started Chrome processes. Is there any solution to this problem?
I tried to solve it by explicitly calling close() and quit(), but it didn't help. After the run finished, I found a couple of active Chrome processes on the server.
Here's a log:
12:51:45.227 preferred port 9222 not available, will use: 52436
12:51:47.524 request:
1 > GET http://localhost:52436/json
1 > Accept-Encoding: gzip,deflate
1 > Connection: Keep-Alive
1 > Host: localhost:52436
1 > User-Agent: Apache-HttpClient/4.5.11 (Java/1.8.0_181)
12:51:47.584 response time in milliseconds: 58,31
1 < 200
1 < Content-Length: 361
1 < Content-Type: application/json; charset=UTF-8
[ {
"description": "",
"devtoolsFrontendUrl": "/devtools/inspector.html?ws=localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F",
"id": "4BD6A5C19E01B01D88995CD69367F81F",
"title": "",
"type": "page",
"url": "about:blank",
"webSocketDebuggerUrl": "ws://localhost:52436/devtools/page/4BD6A5C19E01B01D88995CD69367F81F"
} ]
12:51:47.584 root frame id: 4BD6A5C19E01B01D88995CD69367F81F
12:51:47.632 >> {"method":"Target.activateTarget","params":{"targetId":"4BD6A5C19E01B01D88995CD69367F81F"},"id":1}
12:51:47.638 << {"id":1,"result":{}}
12:51:47.639 >> {"method":"Page.enable","id":2}
12:51:47.883 << {"id":2,"result":{}}
12:51:47.885 >> {"method":"Runtime.enable","id":3}
12:51:47.889 << {"method":"Runtime.executionContextCreated","params":{"context":{"id":1,"origin":"://","name":"","auxData":{"isDefault":true,"type":"default","frameId":"4BD6A5C19E01B01D88995CD69367F81F"}}}}
12:51:47.890 << {"id":3,"result":{}}
12:51:47.890 >> {"method":"Target.setAutoAttach","params":{"autoAttach":true,"waitForDebuggerOnStart":false,"flatten":true},"id":4}
12:51:47.892 << {"id":4,"result":{}}
12:52:02.894 << timed out after milliseconds: 15000 - [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}]
12:52:02.917 driver config / start failed: failed to get reply for: [id: 4, method: Target.setAutoAttach, params: {autoAttach=true, waitForDebuggerOnStart=false, flatten=true}], options: {type=chrome, showDriverLog=true, httpConfig={readTimeout=60000}, headless=true, target=null}
I think the easiest way to solve it is to extend the root web driver interface com.intuit.karate.driver.Driver by adding an additional method like getPID(). This method would return the PID of the launched process.
Can you log a bug and also provide some information as to what happened before this, e.g. whether you are using the Docker image? It looks like the previous Scenario did not shut down cleanly.
But before that, do you mind upgrading to 0.9.6.RC3 (just released) and trying that? There have been a couple of tweaks to the life-cycle, especially around closing Chrome without waiting for a response. It may also be worth adding a couple of changes: a) Target.setAutoAttach without waiting, and b) a configure option to not start Chrome if the port is in use.
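The "preferred port 9222 not available" line at the top of the log is consistent with a leftover Chrome still holding the port. Independently of Karate, a plain socket probe can confirm whether something is listening there; a minimal sketch in Python (9222 is just the Chrome DevTools default, not a Karate API):

```python
import socket

def port_in_use(port: int, host: str = "localhost") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the TCP connection succeeds
        return s.connect_ex((host, port)) == 0

# A leftover Chrome from a previous run would make this return True for 9222.
print(port_in_use(9222))
```

If this returns True before any test has run, a previous Scenario's Chrome is still alive and should be killed before the next run.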

R Script Web API Post Method

I have this code to get a response from a Web API
require("httr")
r <- POST("https://example.com/API/Bulk/2.0/Contacts/exports/",
config = list("Authorization"="Basic abc==","Accept"="application/json",
"body"= "{ 'name':'JB Test - Contact Lead Score Export','fields':{'ID':'{{Contact.Field(ContactIDEXT)}}'}}"))
r
I get the following error message:
Response [example.com/API/Bulk/2.0/Contacts/exports/]
  Date: 2018-03-02 12:49
  Status: 400
  Content-Type: application/json; charset=utf-8
  Size: 175 B
{"failures":[{"field":"name","constraint":"Must be a string value, at least 1 character and at most 100 characters long."},{"field":"fields","constraint":"Is required."}]}
What's wrong with this code?
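The 400 response says the server never received "name" or "fields", which suggests the body was never sent as the request body: in the snippet above it is tucked inside config() as if it were a header, and the payload uses single quotes, which is not valid JSON. In httr terms, body = ... should be a top-level argument to POST() (with valid double-quoted JSON). For comparison, here is the same request sketched in Python, with the actual network call kept commented out (URL and credentials are the placeholders from the question):

```python
import json

# Build the payload as a real data structure, then serialize it; this
# guarantees valid double-quoted JSON.
payload = {
    "name": "JB Test - Contact Lead Score Export",
    "fields": {"ID": "{{Contact.Field(ContactIDEXT)}}"},
}
body = json.dumps(payload)

# The request itself would then look like this (placeholder URL/credentials,
# commented out so the sketch stays self-contained):
# import requests
# r = requests.post(
#     "https://example.com/API/Bulk/2.0/Contacts/exports/",
#     headers={"Authorization": "Basic abc==",
#              "Accept": "application/json",
#              "Content-Type": "application/json"},
#     data=body,
# )

print(body)
```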

Riak db returns 500 Internal server error

I seem to be having an issue with our riak-kv db v2.2.3. We are currently running a 5 node cluster, and out of nowhere last week requests to our db suddenly started returning 500 internal server error for all or almost every request (it might also be valuable to mention that our db has nearly doubled in size over the past 2 weeks). At first we thought the issue was in the code making the requests, but after sshing into one of the nodes in the cluster and attempting a simple request to list buckets we saw this:
Command:
curl -i http://localhost:8098/buckets?buckets=true
Response:
HTTP/1.1 500 Internal Server Error
Vary: Accept-Encoding
Server: MochiWeb/1.1 WebMachine/1.10.9 (cafe not found)
Date: Sun, 10 Dec 2017 06:21:20 GMT
Content-Type: text/html
Content-Length: 1193
<html><head><title>500 Internal Server Error</title></head><body>
<h1>Internal Server Error</h1>The server encountered an error while
processing this request:<br><pre>{error,
{error,
{badmatch,{error,mailbox_overload}},
[{riak_kv_wm_buckets,produce_bucket_list,2,
[{file,"src/riak_kv_wm_buckets.erl"},{line,225}]},
{webmachine_resource,resource_call,3,
[{file,"src/webmachine_resource.erl"},{line,186}]},
{webmachine_resource,do,3,
[{file,"src/webmachine_resource.erl"},{line,142}]},
{webmachine_decision_core,resource_call,1,
[{file,"src/webmachine_decision_core.erl"},{line,48}]},
{webmachine_decision_core,decision,1,
[{file,"src/webmachine_decision_core.erl"},{line,562}]},
{webmachine_decision_core,handle_request,2,
[{file,"src/webmachine_decision_core.erl"},{line,33}]},
{webmachine_mochiweb,loop,2,
[{file,"src/webmachine_mochiweb.erl"},{line,72}]},
{mochiweb_http,headers,5,
[{file,"src/mochiweb_http.erl"},{line,105}]}]}}</pre><P><HR>
<ADDRESS>mochiweb+webmachine web server</ADDRESS></body></html>
After some more investigation into the issue, I pulled the logs on one of the riak nodes and saw this:
2017-12-10 03:54:58.654 [error] <0.24342.271>@yz_solrq_helper:send_solr_ops_for_entries:301 Updating a batch of Solr operations failed for index <<"attachment">> with error
{error,{other,{ok,"500",[{"Content-Type","application/json; charset=UTF-8"},{"Transfer-Encoding","chunked"}],<<"
{\"responseHeader\":{\"status\":500,\"QTime\":1},\"error\":{
"msg": "Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.",
"trace": "org.apache.solr.common.SolrException: Exception writing document id 1*default*Attachment*07d8e24-dc32s-11e7-q9640-a7f8b4edb446*623 to the index; possible analysis error.
at org.apache.solr.update.DirectUpdateHandler2.addDoc(DirectUpdateHandler2.java:169)
at org.apache.solr.update.processor.RunUpdateProcessor.processAdd(RunUpdateProcessorFactory.java:69)
at org.apache.solr.update.processor.UpdateRequestProcessor.processAdd(UpdateRequestProcessor.java:51)
at org.apache.solr.update.processor.DistributedUpdateProcessor.versionAdd(DistributedUpdateProcessor.java:952)
at org.apache.solr.update.processor.DistributedUpdateProcessor.processAdd(DistributedUpdateProcessor.java:692)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.processUpdate(JsonLoader.java:141)
at org.apache.solr.handler.loader.JsonLoader$SingleThreadedJsonLoader.load(JsonLoader.java:106)
at org.apache.solr.handler.loader.JsonLoader.load(JsonLoader.java:68)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:99)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1976)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclip..." >>
}
}
}
After some googling, some people suggested that the underlying cause of this error is:
Caused by: java.lang.OutOfMemoryError: Java heap space
So I modified my riak config from:
search.solr.jvm_options = -d64 -Xms4g -Xmx4g -XX:+UseStringCache -XX:+UseCompressedOops
To:
search.solr.jvm_options = -d64 -Xms8g -Xmx8g -XX:+UseStringCache -XX:+UseCompressedOops
However, I am still getting the same error. Any help is much appreciated.

Play Framework running Asynchronous Web Calls times out

I have to perform three HTTP requests from a Play Application. These calls are directed towards three subprojects of the main application. The architecture looks like this:
main app
|
--modules
|
--component1
|
--component2
|
--component3
each component* is an individual sbt subproject.
The main class in the main app runs this Action:
def service = Action.async(BodyParsers.parse.json) { implicit request =>
val query = request.body
val url1 = "http://localhost:9000/one"
val url2 = "http://localhost:9000/two"
val url3 = "http://localhost:9000/three"
val sync_calls = for {
a <- ws.url(url1).withRequestTimeout(Duration.Inf).withHeaders("Content-Type"->"application/json")
.withBody(query).get()
b <- ws.url(url2).withRequestTimeout(Duration.Inf).withHeaders("Content-Type"->"application/json")
.withBody(a.body).get()
c <- ws.url(url3).withRequestTimeout(Duration.Inf).withHeaders("Content-Type"->"application/json")
.withBody(b.body).get()
} yield c.body
sync_calls.map(x => Ok(x))
}
The components need to be activated one after the other, so the calls have to run in sequence. Each of them runs a Spark job. However, when I call the service action I get this error:
[error] application -
! #71og96ol6 - Internal server error, for (GET) [/automatic] ->
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]] [TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:280)
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:206)
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:160)
at play.api.DefaultGlobal$.onError(GlobalSettings.scala:188)
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:98)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:100)
at play.core.server.netty.PlayRequestHandler$$anonfun$2$$anonfun$apply$1.applyOrElse(PlayRequestHandler.scala:99)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346)
at scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Caused by: java.util.concurrent.TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms
at org.asynchttpclient.netty.timeout.TimeoutTimerTask.expire(TimeoutTimerTask.java:43)
at org.asynchttpclient.netty.timeout.ReadTimeoutTimerTask.run(ReadTimeoutTimerTask.java:54)
at io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:581)
at io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:655)
at io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:367)
at java.lang.Thread.run(Thread.java:745)
and
HTTP/1.1 500 Internal Server Error
Content-Length: 4931
Content-Type: text/html; charset=utf-8
Date: Tue, 25 Oct 2016 21:06:17 GMT
<body id="play-error-page">
<p id="detail" class="pre">[TimeoutException: Read timeout to localhost/127.0.0.1:9000 after 120000 ms]</p>
</body>
I specifically set the timeout for each call to Duration.Inf in order to avoid timeouts. Why is this happening, and how do I fix it?
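The stack trace shows asynchttpclient's read timeout firing, and 120000 ms matches Play WS's default idle timeout, which is a separate setting from the per-request timeout that withRequestTimeout controls. If that is what is happening here, the idle (read) timeout can be raised in application.conf; a sketch, assuming Play 2.5-era WS configuration keys (adjust the values to the worst case of your Spark jobs):

```
# application.conf (HOCON)
play.ws.timeout.connection = 2 minutes
# the idle (read) timeout; its default of 120 seconds matches the 120000 ms in the error
play.ws.timeout.idle = 10 minutes
play.ws.timeout.request = 10 minutes
```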

Swift_Mailer auth and send successful but no email

I'm working on a Symfony project, and I'm trying to send an email from a custom SMTP server (from o2switch) with Swift Mailer.
My parameters.yml is configured with good values, and when I try to send an email I get this log:
++ Starting Swift_SmtpTransport
<< 220-tournevis.o2switch.net ESMTP Exim 4.87 #1 Mon, 23 May 2016 17:27:32 +0200
220-We do not authorize the use of this system to transport unsolicited,
220 and/or bulk e-mail.
>> EHLO MY_IP
<< 250-MY_CUSTOM_SMTP Hello MY_IP [MY_IP]
250-SIZE 52428800
250-8BITMIME
250-PIPELINING
250-AUTH PLAIN LOGIN
250-STARTTLS
250 HELP
>> AUTH LOGIN
<< 334 VXNlcm5hbWU6
>> cC5jb2xsaW5hQG15YmV0ZnJpZW5kLmZy
<< 334 UGFzc3dvcmQ6
>> Nk9oVGJIa1ZwSH4t
<< 235 Authentication succeeded
++ Swift_SmtpTransport started
>> MAIL FROM:<MY_EMAIL_FROM#EMAIL.FR>
<< 250 OK
>> RCPT TO:<MY_EMAIL#gmail.com>
<< 250 Accepted
>> DATA
<< 354 Enter message, ending with "." on a line by itself
>>
.
<< 250 OK id=1b4rlU-004H3I-HE
Successfull.
++ Stopping Swift_SmtpTransport
>> QUIT
<< 221 MY_CUSTOM_SMTP closing connection
++ Swift_SmtpTransport stopped
So I can see that the auth login is OK and the sending is OK, but I never receive any email.
Any ideas? Is the problem in Symfony? My custom SMTP? My server? Thanks!
You can try with mailtrap.io to rule out server-side problems with the sending.
In the past I had trouble sending from a console command because the transport never closed, so the server closed the socket and I never saw any error.
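To rule out the application layer entirely, the same SMTP conversation can be reproduced outside Symfony; a minimal sketch with Python's smtplib, where the host, port, credentials, and addresses are all placeholders to substitute (the send itself is left commented out):

```python
import smtplib
from email.mime.text import MIMEText

# Placeholder values; substitute your real SMTP host and credentials.
HOST, PORT = "smtp.example.net", 587
USER, PASSWORD = "user@example.net", "secret"

msg = MIMEText("Test body")
msg["Subject"] = "SMTP delivery test"
msg["From"] = USER
msg["To"] = "recipient@example.com"

# Uncomment to actually send. A 250 reply after DATA (as in the log above)
# means the server accepted the message, so any delivery problem after that
# point is on the receiving side (spam filtering, DNS/SPF, etc.).
# with smtplib.SMTP(HOST, PORT) as s:
#     s.starttls()
#     s.login(USER, PASSWORD)
#     s.send_message(msg)

print(msg["Subject"])
```

If this standalone script reaches "250 OK" too and the mail still never arrives, the problem is almost certainly with the SMTP provider or the recipient's filtering, not with Symfony or Swift Mailer.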
