SMPP message length error when sending more than 160 characters with jamaa-smpp - asp.net

I am using jamaa-smpp (http://jamaasmpp.codeplex.com/) to send SMS, but I am not able to send more than 160 characters through the API.
The code is as follows:
SmppConnectionProperties properties = _client.Properties;
properties.SystemID = "test";
properties.Password = "test1";
properties.Port = 101; //IP port to use
properties.Host = "..."; //SMSC host name or IP Address
....
Is it possible to send more than 160 characters using that API?
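For context, the normal send path with this library looks roughly like the sketch below; the member names (Start(), SendMessage(), TextMessage) follow the project's published samples, so treat them as assumptions if your version differs:
// Hedged sketch of the usual jamaa-smpp send path; member names are taken
// from the project's samples and may differ in your version of the library.
SmppClient client = new SmppClient();
SmppConnectionProperties properties = client.Properties;
properties.SystemID = "test";
properties.Password = "test1";
properties.Port = 101;         // IP port to use
properties.Host = "127.0.0.1"; // hypothetical SMSC host
client.Start(); // open the SMPP session

TextMessage msg = new TextMessage();
msg.DestinationAddress = "255700000000"; // hypothetical recipient
msg.Text = new string('a', 300);         // longer than one SMS segment
client.SendMessage(msg); // with the stock library this fails for messages over 160 characters
The answer below addresses exactly this limitation.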

I found the solution in the discussion section of the project's website. I am posting it here so that others can find it.
I replaced the existing function in TextMessage.cs (JamaaTech.Smpp.Net.Client).
The function is IEnumerable<SendSmPDU> GetPDUs(DataCoding defaultEncoding). The original version is kept below, commented out, for reference:
//protected override IEnumerable<SendSmPDU> GetPDUs(DataCoding defaultEncoding)
//{
//    //This smpp implementation does not support sending concatenated messages,
//    //however, concatenated messages are supported on the receiving side.
//    int maxLength = GetMaxMessageLength(defaultEncoding, false);
//    byte[] bytes = SMPPEncodingUtil.GetBytesFromString(vText, defaultEncoding);
//    //Check message size
//    if (bytes.Length > maxLength)
//    {
//        throw new InvalidOperationException(string.Format(
//            "Encoding '{0}' does not support messages of length greater than '{1}' charactors",
//            defaultEncoding, maxLength));
//    }
//    SubmitSm sm = new SubmitSm();
//    sm.SetMessageBytes(bytes);
//    sm.SourceAddress.Address = vSourceAddress;
//    sm.DestinationAddress.Address = vDestinatinoAddress;
//    sm.DataCoding = defaultEncoding;
//    if (vRegisterDeliveryNotification) { sm.RegisteredDelivery = RegisteredDelivery.DeliveryReceipt; }
//    yield return sm;
//}
And here is the replacement:
protected override IEnumerable<SendSmPDU> GetPDUs(DataCoding defaultEncoding)
{
    SubmitSm sm = new SubmitSm();
    sm.SourceAddress.Address = vSourceAddress;
    sm.DestinationAddress.Address = vDestinatinoAddress;
    sm.DataCoding = defaultEncoding;
    if (vRegisterDeliveryNotification)
        sm.RegisteredDelivery = RegisteredDelivery.DeliveryReceipt;
    int maxLength = GetMaxMessageLength(defaultEncoding, false);
    byte[] bytes = SMPPEncodingUtil.GetBytesFromString(vText, defaultEncoding);
    if (bytes.Length > maxLength)
    {
        // Message does not fit in a single PDU: split it and send each part
        // with a UDH (User Data Header) so the receiving handset can
        // reassemble the concatenated SMS.
        var segID = new Random().Next(1000, 9999); // concatenation reference number
        var messages = Split(vText, GetMaxMessageLength(defaultEncoding, true));
        var totalSegments = messages.Count;
        var udh = new Udh(segID, totalSegments, 0);
        for (int i = 0; i < totalSegments; i++)
        {
            udh.MessageSequence = i + 1; // 1-based part number
            sm.Header.SequenceNumber = PDUHeader.GetNextSequenceNumber();
            sm.SetMessageText(messages[i], defaultEncoding, udh);
            yield return sm;
        }
    }
    else
    {
        sm.SetMessageBytes(bytes);
        yield return sm;
    }
}
private static List<String> Split(string message, int maxPartLength)
{
    var result = new List<String>();
    for (int i = 0; i < message.Length; i += maxPartLength)
    {
        var chunkSize = i + maxPartLength < message.Length ? maxPartLength : message.Length - i;
        var chunk = new char[chunkSize];
        message.CopyTo(i, chunk, 0, chunkSize);
        result.Add(new string(chunk));
    }
    return result;
}
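For reference: once a UDH is added for concatenation, the per-part capacity drops from 160 to 153 characters for GSM 7-bit encoding (70 to 67 for UCS-2), which is presumably what GetMaxMessageLength(defaultEncoding, true) accounts for above. A self-contained sketch of that arithmetic (the constants are the standard SMS limits, not values read from the jamaa API):
// Standalone sketch: how many concatenated parts a message needs.
// 160/153 (GSM 7-bit) and 70/67 (UCS-2) are the standard SMS segment limits.
static int SegmentCount(string text, bool ucs2)
{
    int single = ucs2 ? 70 : 160;  // capacity of an unsegmented message
    int perPart = ucs2 ? 67 : 153; // capacity per part once a UDH is present
    if (text.Length <= single) return 1;
    return (text.Length + perPart - 1) / perPart; // ceiling division
}
// e.g. SegmentCount(new string('a', 300), false) returns 2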

Related

Decompress stream from HttpClient using SharpZipLib in Xamarin.Forms

I am trying to decompress a stream from HttpClient using SharpZipLib in Xamarin.Forms. This code works perfectly on iOS, but on Android CanDecompressEntry() always returns false. What am I missing? Does Android need some permissions?
var zipStream = await httpClient.GetStreamAsync(url);
using (ZipInputStream s = new ZipInputStream(zipStream))
{
    ZipEntry theEntry;
    byte[] data = new byte[2048];
    Debug.WriteLine(s.CanDecompressEntry);
    while ((theEntry = s.GetNextEntry()) != null)
    {
        if (theEntry.IsFile)
        {
            string str = "";
            while (true)
            {
                int size = s.Read(data, 0, data.Length);
                if (size > 0)
                {
                    str += new UTF8Encoding().GetString(data, 0, size);
                }
                else
                {
                    // files is a Dictionary<string, string> declared outside this snippet
                    files.Add(theEntry.Name.Substring(0, theEntry.Name.Length - 5), str);
                    break;
                }
            }
        }
    }
}
return files;
Ok. I just set ConfigureAwait to false, and it works.
var zipStream = await httpClient.GetStreamAsync(url).ConfigureAwait(false);
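A plausible explanation, offered as an assumption rather than a verified diagnosis: without ConfigureAwait(false) the continuation resumes on the captured UI synchronization context, and the synchronous Read loop then runs on (or contends with) that context on Android. An alternative sketch that sidesteps the issue by moving the whole decompression onto the thread pool (assumes httpClient is in scope, plus the usual System.IO, System.Text, and SharpZipLib usings; entry names are kept whole here rather than trimmed):
var files = await Task.Run(async () =>
{
    var result = new Dictionary<string, string>();
    var zipStream = await httpClient.GetStreamAsync(url).ConfigureAwait(false);
    using (var s = new ZipInputStream(zipStream))
    {
        ZipEntry entry;
        var data = new byte[2048];
        while ((entry = s.GetNextEntry()) != null)
        {
            if (!entry.IsFile) continue;
            using (var ms = new MemoryStream())
            {
                int n;
                while ((n = s.Read(data, 0, data.Length)) > 0)
                    ms.Write(data, 0, n); // accumulate the entry, then decode once
                result.Add(entry.Name, Encoding.UTF8.GetString(ms.ToArray()));
            }
        }
    }
    return result;
});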

GWT read mime type client side

I'm trying to read the MIME type on the GWT client side in order to validate a file before uploading it. To do this I use JSNI to read the file header with the HTML5 FileReader API. However, my problem is that GWT does not wait for the result of the read and continues executing, so my boolean is not set yet and my condition goes wrong. Is there any mechanism like promises implemented in GWT?
Any help on this would be much appreciated!
UploadImageButtonWidget.java
private boolean isMimeTypeValid = false;
private String mimeType = null;
public native boolean isValid(Element element)/*-{
    var widget = this;
    var files = element.files;
    var reader = new FileReader();
    var CountdownLatch = function (limit){
        this.limit = limit;
        this.count = 0;
        this.waitBlock = function (){};
    };
    CountdownLatch.prototype.countDown = function (){
        this.count = this.count + 1;
        if(this.limit <= this.count){
            return this.waitBlock();
        }
    };
    CountdownLatch.prototype.await = function(callback){
        this.waitBlock = callback;
    };
    var barrier = new CountdownLatch(1);
    reader.readAsArrayBuffer(files[0]);
    reader.onloadend = function(e) {
        var arr = (new Uint8Array(e.target.result)).subarray(0, 4);
        var header = "";
        for (var i = 0; i < arr.length; i++) {
            header += arr[i].toString(16);
        }
        widget.@com.portal.client.widgets.base.UploadImageButtonWidget::setMimeType(Ljava/lang/String;)(header);
        barrier.countDown();
    };
    return barrier.await(function(){
        return widget.@com.portal.client.widgets.base.UploadImageButtonWidget::isMimeTypeValid()();
    });
}-*/;
public void setMimeType(String headerString) {
    boolean mimeValid = true;
    if (headerString.equalsIgnoreCase(PNG_HEADER)) {
        mimeType = PNG_MIMETYPE;
    } else if (headerString.equalsIgnoreCase(GIF_HEADER)) {
        mimeType = GIF_MIMETYPE;
    } else if (headerString.equalsIgnoreCase(JPG_HEADER1) || headerString.equalsIgnoreCase(JPG_HEADER2) || headerString.equalsIgnoreCase(JPG_HEADER3)) {
        mimeType = JPG_MIMETYPE;
    } else {
        mimeValid = false;
        setValidationError(i18n.uploadErrorNotImageBasedOnMimeType());
        fileChooser.getElement().setPropertyJSO("files", null);
        setErrorStatus();
    }
    setMimeTypeValid(mimeValid);
}
public boolean isMimeTypeValid() {
    GWT.log("mimeType" + mimeType);
    GWT.log("isMimetypeValid" + String.valueOf(isMimeTypeValid));
    return mimeType != null;
}
In the activity:
public void validateAndUpload() {
    UploadImageButtonWidget uploadImageButtonWidget = view.getUpload();
    if (uploadImageButtonWidget.isValid()) {
        GWT.log("mime ok: will be uploaded");
        uploadImage();
    } else {
        GWT.log("mime not ok: will not be uploaded");
    }
}

Unable to run second WebClient request after timed out and aborting request

I have a desktop app which downloads one or more small files (JPGs under 400 KB, no more than 20 at a time) simultaneously, using a custom WebClient object and calling OpenReadAsync(). The download process works just fine when nothing goes wrong. I want to limit the response time to 15 seconds, so I introduced timeout handling that aborts the request. The timeout works, and my OpenReadCompletedEventHandler method receives System.Net.WebException: "The request was aborted: The request was canceled", which is the right behaviour.
Now, my problem is that I want to allow the user to retry loading the picture(s), but the next WebClient request(s) fail with the same WebException. Below is my code.
Here is my custom WebClient class (used in order to allow more than two async connections at a time):
internal class ExtendedWebClient : WebClient
{
    private Timer _timer;
    public int ConnectionLimit { get; set; }
    public int ConnectionTimeout { get; set; }

    public ExtendedWebClient()
    {
        this.ConnectionLimit = 2;
    }

    protected override WebRequest GetWebRequest(Uri address)
    {
        var request = base.GetWebRequest(address) as HttpWebRequest;
        if (request != null)
        {
            _timer = new Timer(TimeoutRequest, request, ConnectionTimeout, Timeout.Infinite);
            request.ServicePoint.ConnectionLimit = this.ConnectionLimit;
            request.ServicePoint.MaxIdleTime = 5000;
            request.ServicePoint.ConnectionLeaseTimeout = 5000;
        }
        return request;
    }

    private void TimeoutRequest(object state)
    {
        _timer.Dispose();
        _timer = null;
        ((WebRequest)state).Abort();
    }

    protected override void Dispose(bool disposing)
    {
        if (_timer != null)
        {
            _timer.Dispose();
            _timer = null;
        }
        base.Dispose(disposing);
    }
}
Here is the code to download the files using my custom WebClient class:
internal struct PageWaitHandleState
{
    public int WaitHandleIndexInPage;
    public bool ImageIsLoaded;
    public string ErrMessage;
}

public Image[] downloadedImages;
private PageWaitHandleState[] waitHandlesInPage;
private OpenReadCompletedEventHandler[] downloadComplete;
private EventWaitHandle[][] pagesEWH = null; // one array of wait handles per page
private EventWaitHandle[] downloadImageEvent;
private int availableImages = 1; // Set to 1 here to simplify; as stated in my description, it may be more than 1.
int downloadTimeOut = 15000;
int maxSimultaneousDownloads = 20;

private void DownloadImages(string[] imageUrl, int pageIndex = 0)
{
    if (pagesEWH[pageIndex] != null)
    {
        ReloadImages(pageIndex, imageUrl); // Executed on the second request
        return;
    }
    else
    {
        pagesEWH[pageIndex] = new EventWaitHandle[availableImages];
        downloadedImages = new Image[availableImages];
        downloadComplete = new OpenReadCompletedEventHandler[availableImages];
        downloadImageEvent = new EventWaitHandle[availableImages];
        waitHandlesInPage = new PageWaitHandleState[availableImages];
        // Set the downloadComplete delegates
        for (int i = 0; i < availableImages; i++)
        {
            downloadComplete[i] = ProcessImage;
        }
    }
    for (int imgCounter = 0; imgCounter < availableImages; imgCounter++)
    {
        waitHandlesInPage[imgCounter] = new PageWaitHandleState() { ImageIsLoaded = false, WaitHandleIndexInPage = imgCounter, ErrMessage = null };
        downloadImageEvent[imgCounter] = GrabImageAsync(imageUrl[imgCounter], downloadComplete[imgCounter], imgCounter, downloadTimeOut, maxSimultaneousDownloads);
        pagesEWH[pageIndex][imgCounter] = downloadImageEvent[imgCounter];
    }
    offenderIndex++; // field defined elsewhere in the original code
}
private static EventWaitHandle GrabImageAsync(string url, OpenReadCompletedEventHandler openReadCompletedEventHandler, int imgCounter, int downloadTimeOut, int maxSimultaneousDownloads)
{
    var myClient = new ExtendedWebClient();
    myClient.ConnectionLimit = maxSimultaneousDownloads;
    myClient.ConnectionTimeout = downloadTimeOut;
    myClient.OpenReadCompleted += openReadCompletedEventHandler;
    var iewh = new ImageEventWaitHandle() { ewh = new EventWaitHandle(false, EventResetMode.ManualReset), ImageIndex = imgCounter };
    myClient.OpenReadAsync(new Uri(url), iewh);
    return iewh.ewh;
}
internal void ProcessImage(object sender, OpenReadCompletedEventArgs e)
{
    ImageEventWaitHandle iewh = (ImageEventWaitHandle)e.UserState;
    bool disposeObject = false;
    try
    {
        if (e.Cancelled)
        {
            this.waitHandlesInPage[iewh.ImageIndex].ImageIsLoaded = false;
            this.waitHandlesInPage[iewh.ImageIndex].ErrMessage = "WebClient request was cancelled";
        }
        else if (e.Error != null)
        {
            this.waitHandlesInPage[iewh.ImageIndex].ImageIsLoaded = false;
            this.waitHandlesInPage[iewh.ImageIndex].ErrMessage = e.Error.Message;
            iewh.ewh.Set();
            this.downloadImageEvent[iewh.ImageIndex].Close();
        }
        else
        {
            using (Stream inputStream = e.Result)
            using (MemoryStream ms = new MemoryStream())
            {
                byte[] buffer = new byte[4096];
                int bytesRead;
                int totalReadBytes = 0;
                do
                {
                    bytesRead = inputStream.Read(buffer, 0, buffer.Length); // Exception fired here with the second request
                    ms.Write(buffer, 0, bytesRead);
                    totalReadBytes += bytesRead;
                } while (inputStream.CanRead && bytesRead > 0);
                this.downloadedImages[iewh.ImageIndex] = Image.FromStream(ms);
                this.waitHandlesInPage[iewh.ImageIndex].ImageIsLoaded = true;
                this.waitHandlesInPage[iewh.ImageIndex].ErrMessage = null;
            }
            disposeObject = true;
        }
    }
    catch (Exception exc)
    {
        this.downloadedImages[iewh.ImageIndex] = null;
    }
    finally
    {
        // Signal the wait handle
        if (disposeObject)
        {
            iewh.ewh.Set();
            ((WebClient)sender).Dispose();
        }
    }
}
private void ReloadImages(int pageIndex, string[] imageUrl)
{
    for (int imgCounter = 0; imgCounter < availableImages; imgCounter++)
    {
        this.downloadComplete[imgCounter] = this.ProcessImage;
        this.waitHandlesInPage[imgCounter] = new PageWaitHandleState() { ImageIsLoaded = false, WaitHandleIndexInPage = imgCounter, ErrMessage = null };
        this.downloadImageEvent[imgCounter] = GrabImageAsync(imageUrl[imgCounter], this.downloadComplete[imgCounter], imgCounter, downloadTimeOut, maxSimultaneousDownloads);
        this.pagesEWH[pageIndex][imgCounter] = this.downloadImageEvent[imgCounter];
    }
}
Finally, when I want to access the images I check if they are ready by using:
private bool ImagesInPageReady(int pageIndex, int recordsInCurrentPage)
{
    if (pagesEWH[pageIndex] != null)
    {
        int completedDownloadsCount = 0;
        // Wait for the default images first (imgCounter = 0). When moving page or asking for more pictures, then wait for the others.
        for (int ewhIndexInPage = 0; ewhIndexInPage < recordsInCurrentPage; ewhIndexInPage++)
        {
            if (this.pagesEWH[pageIndex][ewhIndexInPage].WaitOne(this.downloadTimeOut))
            {
                if (this.waitHandlesInPage[ewhIndexInPage].ImageIsLoaded)
                {
                    completedDownloadsCount++;
                }
            }
            else
            {
                this.pagesEWH[pageIndex][ewhIndexInPage].Set();
            }
        }
        return (completedDownloadsCount > 0);
    }
    return false;
}
@usr, thanks for pointing me in the right direction. HttpClient was the solution. So I basically encapsulated my HttpClient object in a new class, together with the ProcessImage() method, exposing an event fired by that method.
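For anyone landing here, a minimal sketch of that HttpClient approach with a per-request timeout via CancellationTokenSource; the class and event names below are illustrative assumptions, not the poster's actual code:
// Hedged sketch: one shared HttpClient, per-download timeout via CancellationTokenSource.
// Unlike WebRequest.Abort(), a timed-out request here leaves the client reusable,
// so a user-triggered retry is just another DownloadAsync call.
internal class ImageDownloader
{
    private static readonly HttpClient _http = new HttpClient();

    // Raised with (image index, image bytes, error message); bytes is null on failure.
    public event Action<int, byte[], string> ImageCompleted;

    public async Task DownloadAsync(string url, int imageIndex, int timeoutMs)
    {
        using (var cts = new CancellationTokenSource(timeoutMs))
        {
            try
            {
                using (HttpResponseMessage resp = await _http.GetAsync(url, cts.Token))
                {
                    resp.EnsureSuccessStatusCode();
                    byte[] bytes = await resp.Content.ReadAsByteArrayAsync();
                    ImageCompleted?.Invoke(imageIndex, bytes, null);
                }
            }
            catch (OperationCanceledException)
            {
                ImageCompleted?.Invoke(imageIndex, null, "Request timed out");
            }
            catch (Exception ex)
            {
                ImageCompleted?.Invoke(imageIndex, null, ex.Message);
            }
        }
    }
}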

How to signal WaitHandle.WaitAny when doing async requests?

I'm using the following sample code to fetch some HTML pages using async requests.
I don't want to wait until every request completes, as WaitHandle.WaitAll does; I only want to wait until the correct value is found. I'm currently doing it this way, but it feels wrong to pass ManualResetEvents to the thread. Is this how it should be done? Is there a better way?
public static void runprogram()
{
    System.Net.ServicePointManager.DefaultConnectionLimit = 20;
    FetchPageDelegate del = new FetchPageDelegate(FetchPage);
    List<HtmlDocument> htmllist = new List<HtmlDocument>();
    List<IAsyncResult> results = new List<IAsyncResult>();
    List<WaitHandle> waitHandles = new List<WaitHandle>();
    List<ManualResetEvent> handles = new List<ManualResetEvent>();
    for (int i = 0; i < 20; i++)
    {
        ManualResetEvent e = new ManualResetEvent(false);
        handles.Add(e);
    }
    int y = 0; // declared outside the loop; in the original it was re-declared each iteration, so every request got handles[0]
    for (int i = 0; i < 200; i += 10)
    {
        string url = @"URLTOPARSE" + i;
        IAsyncResult result = del.BeginInvoke(url, handles[y], null, null);
        results.Add(result);
        waitHandles.Add(result.AsyncWaitHandle);
        y++;
    }
    // Here I check for a signal
    WaitHandle.WaitAny(handles.ToArray());
    //WaitHandle.WaitAll(waitHandles.ToArray());
    foreach (IAsyncResult async in results)
    {
        FetchPageDelegate delle = (async as AsyncResult).AsyncDelegate as FetchPageDelegate;
        HtmlDocument htm = delle.EndInvoke(async);
        if (htm.DocumentNode.InnerHtml.Contains("ANYTHING TO CHECK FOR(ONLY A TEST"))
        {
            return;
        }
    }
}
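On newer frameworks, a cleaner shape for "return as soon as one result matches" is Task.WhenAny, which removes the hand-rolled wait handles entirely. A minimal sketch; FetchPageAsync is a hypothetical async counterpart of the FetchPage delegate above:
// Hedged sketch: Task-based version of "wait only until the correct value is found".
// FetchPageAsync is a hypothetical async version of FetchPage, returning page HTML.
static async Task<string> FirstMatchAsync(IEnumerable<string> urls, string needle)
{
    var pending = urls.Select(u => FetchPageAsync(u)).ToList();
    while (pending.Count > 0)
    {
        Task<string> done = await Task.WhenAny(pending); // completes when any fetch finishes
        pending.Remove(done);
        string html = await done;
        if (html.Contains(needle))
            return html; // stop without waiting for the remaining requests
    }
    return null;
}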

repeatedly call AddImageUrl(url) to assemble pdf document

I'm using abcpdf and I'm curious whether we can repeatedly call the AddImageUrl() function to assemble a PDF document that combines multiple URLs.
Something like:
int pageCount = 0;
int theId = theDoc.AddImageUrl("http://stackoverflow.com/search?q=abcpdf+footer+page+x+out+of+", true, 0, true);
// assemble document
while (theDoc.Chainable(theId))
{
    theDoc.Page = theDoc.AddPage();
    theId = theDoc.AddImageToChain(theId);
}
pageCount = theDoc.PageCount;
Console.WriteLine("1 document page count:" + pageCount);
// Flatten document
for (int i = 1; i <= pageCount; i++)
{
    theDoc.PageNumber = i;
    theDoc.Flatten();
}
// now try again
theId = theDoc.AddImageUrl("http://stackoverflow.com/questions/1980890/pdf-report-generation", true, 0, true);
// assemble document
while (theDoc.Chainable(theId))
{
    theDoc.Page = theDoc.AddPage();
    theId = theDoc.AddImageToChain(theId);
}
Console.WriteLine("2 document page count:" + theDoc.PageCount);
// Flatten document
for (int i = pageCount + 1; i <= theDoc.PageCount; i++)
{
    theDoc.PageNumber = i;
    theDoc.Flatten();
}
pageCount = theDoc.PageCount;
Edit:
Code that seems to work, based on 'hunter's' solution:
static void Main(string[] args)
{
    Test2();
}

static void Test2()
{
    Doc theDoc = new Doc();
    // Set minimum number of items a page of HTML should contain,
    // otherwise the page will be assumed to be invalid.
    theDoc.HtmlOptions.ContentCount = 10;
    theDoc.HtmlOptions.RetryCount = 10;  // Try to obtain the HTML page up to 10 times
    theDoc.HtmlOptions.Timeout = 180000; // Give up if the page is not obtained within 180 seconds
    theDoc.Rect.Inset(0, 10); // set up document
    theDoc.Rect.Position(5, 15);
    theDoc.Rect.Width = 602;
    theDoc.Rect.Height = 767;
    theDoc.HtmlOptions.PageCacheEnabled = false;
    IList<string> urls = new List<string>();
    urls.Add("http://stackoverflow.com/search?q=abcpdf+footer+page+x+out+of+");
    urls.Add("http://stackoverflow.com/questions/1980890/pdf-report-generation");
    urls.Add("http://yahoo.com");
    urls.Add("http://stackoverflow.com/questions/4338364/recursively-call-addimageurlurl-to-assemble-pdf-document");
    foreach (string url in urls)
        AddImage(ref theDoc, url);
    // Flatten document
    for (int i = 1; i <= theDoc.PageCount; i++)
    {
        theDoc.PageNumber = i;
        theDoc.Flatten();
    }
    theDoc.Save("batchReport.pdf");
    theDoc.Clear();
    Console.Read();
}

static void AddImage(ref Doc theDoc, string url)
{
    int theId = theDoc.AddImageUrl(url, true, 0, true);
    while (theDoc.Chainable(theId))
    {
        theDoc.Page = theDoc.AddPage();
        theId = theDoc.AddImageToChain(theId); // is this right?
    }
    Console.WriteLine(string.Format("document page count: {0}", theDoc.PageCount));
}
Edit 2: unfortunately, calling AddImageUrl multiple times on the same document doesn't seem to work reliably when generating PDF documents.
I finally found a reliable solution.
Instead of executing the AddImageUrl() function on the same underlying document, we should execute AddImageUrl() on its own Doc document, build a collection of documents, and at the end assemble them into one document using the Append() method.
Here is the code:
static void Main(string[] args)
{
    Test2();
}

static void Test2()
{
    Doc theDoc = new Doc();
    var urls = new Dictionary<int, string>();
    urls.Add(1, "http://www.asp101.com/samples/server_execute_aspx.asp");
    urls.Add(2, "http://stackoverflow.com/questions/4338364/repeatedly-call-addimageurlurl-to-assemble-pdf-document");
    urls.Add(3, "http://www.google.ca/");
    urls.Add(4, "http://ca.yahoo.com/?p=us");
    var theDocs = new List<Doc>();
    foreach (int key in urls.Keys)
        theDocs.Add(GetReport(urls[key]));
    foreach (var doc in theDocs)
    {
        if (theDocs.IndexOf(doc) == 0)
            theDoc = doc;
        else
            theDoc.Append(doc);
    }
    theDoc.Save("batchReport.pdf");
    theDoc.Clear();
    Console.Read();
}

static Doc GetReport(string url)
{
    Doc theDoc = new Doc();
    // Set minimum number of items a page of HTML should contain,
    // otherwise the page will be assumed to be invalid.
    theDoc.HtmlOptions.ContentCount = 10;
    theDoc.HtmlOptions.RetryCount = 10;  // Try to obtain the HTML page up to 10 times
    theDoc.HtmlOptions.Timeout = 180000; // Give up if the page is not obtained within 180 seconds
    theDoc.Rect.Inset(0, 10); // set up document
    theDoc.Rect.Position(5, 15);
    theDoc.Rect.Width = 602;
    theDoc.Rect.Height = 767;
    theDoc.HtmlOptions.PageCacheEnabled = false;
    int theId = theDoc.AddImageUrl(url, true, 0, true);
    while (theDoc.Chainable(theId))
    {
        theDoc.Page = theDoc.AddPage();
        theId = theDoc.AddImageToChain(theId);
    }
    // Flatten document
    for (int i = 1; i <= theDoc.PageCount; i++)
    {
        theDoc.PageNumber = i;
        theDoc.Flatten();
    }
    return theDoc;
}
