ColdFusion 2018 and BlazeDS DateTime Parse Error for Three Char Daylight Saving Time Code - apache-flex

When using BlazeDS (Flex app) to send dates to CF, and the date sent is within Daylight Saving Time, CF fails with an error:
[BlazeDS] Error deserializing client message.
coldfusion.runtime.locale.CFLocaleBase$InvalidDateTimeException: July 8, 2016 6:00:00 PM EDT is an invalid date or time string.
My guess is that this is caused by CF 2018 running on Java 10, which has a known issue where the CLDR locale data does not recognize three-character time zone abbreviations (we are a US shop). Even with the standard JVM switch -Djava.locale.providers=COMPAT,SPI it does not work; it fails every time.
Does anyone have any ideas on how to resolve this? I am about to try using a separate JVM as a test, but I'm not sure whether that will work. I suspect BlazeDS is not playing nicely with the JVM (using its own, maybe?)
Here is a zip file containing a sample project; see the "ADDITIONAL" sub-folder for logs, a screenshot of the proxy AMF dump, etc. Copy the Additional -> remotingDateTest folder to your web root and adjust the RemoteObject in the project application file: https://www.dropbox.com/s/xte4bqrkp7loefi/Remoting%20Test.zip?dl=0

Adobe actually provided me with the answer: add the following to my JVM args (it works!):
-Duser.timezone=America/New_York
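To sanity-check the fix outside ColdFusion, here is a minimal Java sketch (class and method names are hypothetical; setting the default zone in code has the same effect as the -Duser.timezone startup flag) that parses the exact string from the error message:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class TzCheck {
    // Parse the string that ColdFusion rejected, after pinning the JVM
    // default time zone (same effect as -Duser.timezone=America/New_York).
    static Date parseFailingDate() throws Exception {
        TimeZone.setDefault(TimeZone.getTimeZone("America/New_York"));
        SimpleDateFormat fmt =
            new SimpleDateFormat("MMMM d, yyyy h:mm:ss a zzz", Locale.US);
        return fmt.parse("July 8, 2016 6:00:00 PM EDT");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parseFailingDate());
    }
}
```

If this parses cleanly but ColdFusion still fails, the problem is in CF's own locale layer rather than the JDK's.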

Related

IIS sending 1 less second in Last-Modified response header parameter

We have an IIS server with a web service set up. When a user calls the web service, we fetch a value from a DATETIME field in the database.
After fetching the value from the database, the C# code converts it with the code below:
LastModified = DateTime.SpecifyKind(rs.GetDateTime(1), DateTimeKind.Utc); // here rs.GetDateTime(1) = 2022-04-06 11:46:45.000
The code then adds this value to the response's Last-Modified header. When we call this API from the client end, it shows 2022-04-06 11:46:44.000 GMT (1 second less).
I checked everything but did not find anything. The server time is up to date, there are no issues with the IIS setup, and the time zone is UTC and appears to be synced.
We even have the same setup on another server, where it works fine without the 1-second discrepancy.
When I run the command on the server itself, it shows the correct time. That makes me suspect that somewhere in between, IIS is reducing the Last-Modified header value by 1 second.
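For what it's worth, one place a one-second discrepancy can creep in: the RFC 1123 wire format used by Last-Modified has whole-second granularity, and formatters truncate fractional seconds rather than rounding them. A minimal sketch (in Java purely for illustration; .NET's "R" round-trip format behaves the same way):

```java
import java.time.LocalDateTime;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class LastModifiedDemo {
    // RFC 1123 formatting (the Last-Modified wire format) drops
    // fractional seconds; it never rounds them up.
    static String toHttpDate(LocalDateTime local) {
        return DateTimeFormatter.RFC_1123_DATE_TIME.format(local.atZone(ZoneOffset.UTC));
    }

    public static void main(String[] args) {
        // The 999 ms are simply truncated: the header says :45, not :46
        System.out.println(toHttpDate(LocalDateTime.of(2022, 4, 6, 11, 46, 45, 999_000_000)));
    }
}
```

So if one server's DATETIME carries sub-second precision and the other's does not, comparing the header against a rounded source value would show exactly this "one second less" symptom.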

BizTalk WCF-Custom sqlbinding not sending parameters to stored procedure

I am redeploying an old BizTalk 2010 project to a new BizTalk 2016 server. The code hasn't changed at all - as far as I can tell - so it should be a straight deployment. I am very much a BizTalk newbie, but I have managed to get it all set up. However, when I send a message, it works through the orchestration until it reaches the point where it should call a stored procedure - which it does - but it doesn't send the parameters. There is an audit step beforehand, so I can see that it has generated the message with the values, and the schema for the stored procedure is there, but I can't see why it isn't sending the values. Does anyone have any idea why?
I have checked every setting against a live-running version and it all seems to be the same. I'm tearing my hair out and I have precious little to spare.

APIC 2018.3.7 OVA: The Assembly part of API is not being deployed - has reverted to a much earlier version?

As of yesterday, when I publish the (current) Product and its API, the deployed Assembly is not updated and what is running is from an earlier state - most likely from early December. The APIC domain was created at the end of Nov, so what is deployed could even be the initial deployment.
As a test, I changed the API's description (add 'XXX') and changed a Gateway script to add XXX to a 'console.warn' at the start of the Assembly. The description change can be seen in Portal, but there is no 'XXX' visible in the DP log. I set DP log level to 'debug', but none of the 'tracing' statements added in Dec can be seen.
Does anyone have any ideas as to how I can resolve this? Or, how can I see the API's deployed code? I've looked in the DP File Management, but everything is dated at the end of Nov.
[EDIT]
Catalogs are in Dev mode, so I change and publish using same version. I have just done a series of tests using the api referred to above (VAT-Num-Check) (which is our first 'real' api) and an older trivial one that just divides two numbers. The Sandbox catalog is associated with the DEV gateway, and the SIT catalog, with the SIT gateway.
The other kind of corruption is that, after deploying a new API, calling it results in a 404 'No resources match requested URI'.
My conclusions are:
Something has broken in the Mgmt server and/or the DP APIC Gateway. Once code has been deployed to DP, it can't be changed or deleted. Changes in Portal are correct.
The possible exception is that the deployment of the VAT-Num-check API appears to have reverted to an earlier version after a CLI publish to the SIT Gateway this morning.
[/EDIT]
Background:
I have been creating a Windows script to publish draft Product/APIs and then run Postman tests. This means that I have been performing a lot of publish actions to DP (V5 type). On Monday evening, in my last run, the Postman tests all worked. Yesterday morning, some failed.
Back in early Dec, I made a change so that all JSON error messages in user responses used error as the 'prefix' to the message contents. Before that, some used message and some used reply. The reason for the failures is that the error messages have reverted to using the earlier 'prefix'.
API Connect 2018.3.7 went out of support on November 15, 2018. You'd need to upgrade to 2018.4.1.x, which will be supported for a longer term.
If you still have the issue at that point, then please open a support ticket for further investigation.

Microsoft Translator oAuth call for token generation timeout

I've deployed a Microsoft Cognitive Services Translator Text API environment in Azure. I've been following the documentation and landed on the "Getting a token" section on the interactive page. Inserting my private key in the field makes the process wait a very long time and fail, most of the time. I was able to get a result a few times, but it's very rare.
Using command-line curl, I've been getting 500s or SSLRead errors. Is the service down at the moment, or was it moved somewhere else?
See https://azure.microsoft.com/en-us/status/history/:
2/14 Cognitive Services | Resolved
Summary of impact: Between as early as 22:00 UTC on 13 Feb 2017 and 4:00 UTC on 14 Feb 2017, a subset of customers using Cognitive Services may have received intermittent timeouts or errors when making API requests or generating tokens for their Cognitive Services.
Preliminary root cause: At this stage Engineers do not have a definitive root cause.
Mitigation: Engineers scaled out the service in order to mitigate.
Next steps: Engineers will continue to investigate to establish the full root cause.

Issue running ASPX page using Scheduled Task

I have a scheduled task set up to run Scan.aspx every 3 minutes in IE7. Scan.aspx reads data from 10 files in sequence. These files are constantly being updated. The values from the file are inserted into a database.
Sporadically, the value being read is truncated or distorted. For example, if the value in the file was "Hello World", random entries such as "Hello W", "Hel", etc. will be in the database. The timestamps on these entries appear completely random. Sometimes at 1:00 am, sometimes at 3:30 am. And some nights, this doesn't occur at all.
I'm unable to reproduce this issue when I debug the code. So I know under "normal" circumstances, the code executes correctly.
UPDATE:
Here is the aspx codebehind (in Page_Load) to read a text file (this is called for each of the 10 text files):
Dim filename As String = location
If File.Exists(filename) Then
    Using MyParser As New FileIO.TextFieldParser(filename)
        MyParser.TextFieldType = FileIO.FieldType.Delimited
        MyParser.SetDelimiters("~")
        Dim currentrow As String()
        Dim valueA, valueB As String
        While Not MyParser.EndOfData
            Try
                currentrow = MyParser.ReadFields()
                valueA = currentrow(0).ToUpper
                valueB = currentrow(1).ToUpper
                ' insert values as a record into the DB if it does not exist already
            Catch ex As Exception
            End Try
        End While
    End Using
End If
Any ideas why this might cause issues when running multiple times throughout the day (via scheduled task)?
First implement a logger such as log4net in your ASP.NET solution and log method entry and exit points in Scan.aspx as well as in your method for updating the DB. There is a chance this may provide some hint of what is going on. You should also check the System Event Log to see whether any other event is associated with your failed DB entries.
ASP.NET is not the best fit for this scenario, especially when paired with a Windows scheduled task; this is not a robust design. A more robust system would run on a timer inside a Windows Service application. Your code for reading the files and updating the DB could be ported across. If you have access to the server and can install a Windows Service, make sure you add logging to the Windows Service too!
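The timer-in-a-service shape looks roughly like this (sketched in Java with a ScheduledExecutorService for brevity; a .NET Windows Service would use a System.Timers.Timer started in OnStart, and the ScanService/scanFiles names here are hypothetical):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ScanService {
    // A long-running service loop: the scan runs on a fixed schedule inside
    // one process, instead of a fresh browser launch every 3 minutes.
    final AtomicInteger scansCompleted = new AtomicInteger();
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor();

    void start(long period, TimeUnit unit) {
        scheduler.scheduleAtFixedRate(this::scanFiles, 0, period, unit);
    }

    void stop() {
        scheduler.shutdown();
    }

    private void scanFiles() {
        // ... read the 10 delimited files and update the DB here ...
        scansCompleted.incrementAndGet();
    }

    public static void main(String[] args) throws InterruptedException {
        ScanService svc = new ScanService();
        svc.start(3, TimeUnit.MINUTES); // production cadence from the question
        Thread.sleep(100);              // let the immediate first run happen
        svc.stop();
        System.out.println("scans completed: " + svc.scansCompleted.get());
    }
}
```

The key difference from the scheduled-task approach is that failures surface in one place (the service's log) instead of in whatever IE7 happens to do with a broken page.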
Make sure you read the How to Debug below
Windows Service Applications intro on MSDN has further links to:
How to: Create Windows Services
How to: Install and Uninstall Services
How to: Start Services
How to: Debug Windows Service Applications
Walkthrough: Creating a Windows Service Application in the Component Designer
How to: Add Installers to Your Service Application
Regarding your follow-up comment about the apparently random entries that sometimes occur at 1am and 3:30am, you should:
Investigate the IIS log for the site when these occur and find out what hit (visited) the page at that time.
Check whether there is an indexing service on the server that is visiting your aspx page.
Check whether anti-virus software is installed and ascertain whether it is visiting your aspx page or impacting the ASP.NET cache. This can cause compilation issues such as file locks on the page in the ASP.NET cache (a scenario for ASP.NET websites as opposed to ASP.NET web applications), which could give weird behavior.
Find out if the truncated entries coincide with the time that the files are updated: cross reference your db entries timestamp or logger timestamp with the time the files are updated.
Update your logger to log the entire contents of the file being read to verify you've not got a 'junk-in > junk-out' scenario. Be careful with disk space on the server; run this for one night only.
Find out when the App-Pool that your web app runs under is recycled and cross reference this with the time of your truncated entries; you can do this with web.config only via ASP.NET Health Monitoring.
Your code is written with a 'try catch' that will bury errors. If you are not going to do something useful with your caught error then do not catch it. Handle your edge cases in code, not a try catch.
See this try-catch question on this site.
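To illustrate the point about the empty Catch: a row truncated mid-write (fewer than two fields) should be detected and logged, not silently swallowed. A sketch of that edge-case handling, in Java with hypothetical names:

```java
import java.util.ArrayList;
import java.util.List;

public class RowParser {
    // Parse "~"-delimited rows, skipping and reporting malformed ones
    // instead of hiding them inside an empty catch block.
    static List<String[]> parseRows(List<String> lines) {
        List<String[]> rows = new ArrayList<>();
        for (String line : lines) {
            String[] fields = line.split("~", -1);
            if (fields.length < 2 || fields[0].isEmpty()) {
                // A truncated write shows up here; log it rather than losing it
                System.err.println("Skipping malformed row: " + line);
                continue;
            }
            rows.add(new String[] { fields[0].toUpperCase(), fields[1].toUpperCase() });
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("hello~world", "truncat"); // second row is cut off
        List<String[]> rows = parseRows(lines);
        System.out.println("complete rows: " + rows.size());
    }
}
```

With this shape, the "Hello W" / "Hel" entries from the question would appear in the log as skipped rows, pointing directly at the files being read while they are still being written.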
