Flex application throws error when making http call (GET request) - apache-flex

I get the following error occasionally while running my Flex application. I don't know what is wrong, as it works intermittently.
Error occured (mx.messaging.messages::ErrorMessage)#0
  body = ""
  clientId = "DirectHTTPChannel0"
  correlationId = "ACB1E8D4-51AF-21B9-E440-8ADBA0D4301E"
  destination = ""
  extendedData = (null)
  faultCode = "Server.Error.Request"
  faultDetail = "Error: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2 text="Error #2032: Stream Error. URL: http://localhost/api/wireless/lookupNumber/6501234567/"]. URL: http://localhost/api/wireless/lookupNumber/6501234567/"
  faultString = "HTTP request error"
  headers = (Object)#1
    DSStatusCode = 0
  messageId = "21CFD7A0-209E-81E2-8416-8ADBAC4F6B69"
  rootCause = (flash.events::IOErrorEvent)#2
    bubbles = false
    cancelable = false
    currentTarget = (flash.net::URLLoader)#3
      bytesLoaded = 0
      bytesTotal = 0
      data = ""
      dataFormat = "text"
    errorID = 0
    eventPhase = 2
    target = (flash.net::URLLoader)#3
    text = "Error #2032: Stream Error. URL: http://localhost/api/wireless/lookupNumber/6501234567/"
    type = "ioError"
  timestamp = 0
  timeToLive = 0
Can you please guide me?
Thanks

Related

Is it possible to randomly sample YouTube comments with YouTube API V3?

I have been trying to download all the YouTube comments on popular videos using python requests, but it has been throwing up the following error after about a quarter of the total comments:
{'error': {'code': 400, 'message': "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the request's input is invalid. Check the structure of the commentThread resource in the request body to ensure that it is valid.", 'errors': [{'message': "The API server failed to successfully process the request. While this can be a transient error, it usually indicates that the request's input is invalid. Check the structure of the commentThread resource in the request body to ensure that it is valid.", 'domain': 'youtube.commentThread', 'reason': 'processingFailure', 'location': 'body', 'locationType': 'other'}]}}
I found this thread detailing the same issue, and it seems that it is not possible to download all the comments on popular videos.
This is my code:
import argparse
import urllib
import requests
import json
import time

start_time = time.time()


class YouTubeApi():

    YOUTUBE_COMMENTS_URL = 'https://www.googleapis.com/youtube/v3/commentThreads'
    comment_counter = 0

    with open("API_keys.txt", "r") as f:
        key_list = f.readlines()
    key_list = [key.strip('/n') for key in key_list]

    def format_comments(self, results, likes_required):
        comments_list = []
        try:
            for item in results["items"]:
                comment = item["snippet"]["topLevelComment"]
                likes = comment["snippet"]["likeCount"]
                if likes < likes_required:
                    continue
                author = comment["snippet"]["authorDisplayName"]
                text = comment["snippet"]["textDisplay"]
                str = "Comment by {}:\n \"{}\"\n\n".format(author, text)
                str = str.encode('ascii', 'replace').decode()
                comments_list.append(str)
                self.comment_counter += 1
                print("Comments downloaded:", self.comment_counter, end="\r")
        except(KeyError):
            print(results)
        return comments_list

    def get_video_comments(self, video_id, likes_required):
        with open("API_keys.txt", "r") as f:
            key_list = f.readlines()
        key_list = [key.strip('/n') for key in key_list]
        if self.comment_counter <= 900000:
            key = self.key_list[0]
        elif self.comment_counter <= 1800000:
            key = self.key_list[1]
        elif self.comment_counter <= 2700000:
            key = self.key_list[2]
        elif self.comment_counter <= 3600000:
            key = self.key_list[3]
        elif self.comment_counter <= 4500000:
            key = self.key_list[4]
        params = {
            'part': 'snippet,replies',
            'maxResults': 100,
            'videoId': video_id,
            'textFormat': 'plainText',
            'key': key
        }
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36'
        }
        try:
            #data = self.openURL(self.YOUTUBE_COMMENTS_URL, params)
            comments_data = requests.get(self.YOUTUBE_COMMENTS_URL, params=params, headers=headers)
        except ChunkedEncodingError:
            tries = 5
            print("Chunked Error. Retrying...")
            for n in range(tries):
                try:
                    x = 0
                    x += 1
                    print("Trying", x, "times")
                    response = session.post("https://www.youtube.com/comment_service_ajax", params=params, data=data, headers=headers)
                    comments_data = json.loads(response.text)
                except ChunkedEncodingError as c:
                    print(c)
        results = comments_data.json()
        nextPageToken = results.get("nextPageToken")
        commments_list = []
        commments_list += self.format_comments(results, likes_required)
        while nextPageToken:
            params.update({'pageToken': nextPageToken})
            try:
                comments_data = requests.get(self.YOUTUBE_COMMENTS_URL, params=params, headers=headers)
            except ChunkedEncodingError as c:
                tries = 5
                print("Chunked Error. Retrying...")
                for n in range(tries):
                    try:
                        x = 0
                        x += 1
                        print("Trying", x, "times")
                        response = session.post("https://www.youtube.com/comment_service_ajax", params=params, data=data, headers=headers)
                        comments_data = json.loads(response.text)
                    except ChunkedEncodingError as c:
                        print(c)
            results = comments_data.json()
            nextPageToken = results.get("nextPageToken")
            commments_list += self.format_comments(results, likes_required)
        return commments_list

    def get_video_id_list(self, filename):
        try:
            with open(filename, 'r') as file:
                URL_list = file.readlines()
        except FileNotFoundError:
            exit("File \"" + filename + "\" not found")
        list = []
        for url in URL_list:
            if url == "\n":  # ignore empty lines
                continue
            if url[-1] == '\n':  # delete '\n' at the end of line
                url = url[:-1]
            if url.find('='):  # get id
                id = url[url.find('=') + 1:]
                list.append(id)
            else:
                print("Wrong URL")
        return list


def main():
    yt = YouTubeApi()
    parser = argparse.ArgumentParser(add_help=False, description=("Download youtube comments from many videos into txt file"))
    required = parser.add_argument_group("required arguments")
    optional = parser.add_argument_group("optional arguments")
    here: https://console.developers.google.com/apis/credentials")
    optional.add_argument("--likes", '-l', help="The amount of likes a comment needs to be saved", type=int)
    optional.add_argument("--input", '-i', help="URL list file name")
    optional.add_argument("--output", '-o', help="Output file name")
    optional.add_argument("--help", '-h', help="Help", action='help')
    args = parser.parse_args()
    # --------------------------------------------------------------------- #
    likes = 0
    if args.likes:
        likes = args.likes
    input_file = "URL_list.txt"
    if args.input:
        input_file = args.input
    output_file = "Comments.txt"
    if args.output:
        output_file = args.output
    list = yt.get_video_id_list(input_file)
    if not list:
        exit("No URLs in input file")
    try:
        vid_counter = 0
        with open(output_file, "a") as f:
            for video_id in list:
                vid_counter += 1
                print("Downloading comments for video ", vid_counter, ", id: ", video_id, sep='')
                comments = yt.get_video_comments(video_id, likes)
                if comments:
                    for comment in comments:
                        f.write(comment)
        print('\nDone!')
    except KeyboardInterrupt:
        exit("User Aborted the Operation")
    # --------------------------------------------------------------------- #


if __name__ == '__main__':
    main()
The next best method would be to randomly sample them. Does anyone know if this is possible with the API V3?
Even if the API returns a processingFailure error, you could still catch that (or any other API error, for that matter) in order to terminate your pagination loop gracefully. This way your script will provide the top-level comments that it fetched from the API prior to the occurrence of the first API error.
The error response provided by the YouTube Data API is (usually) of the following form:
{
  "error": {
    "errors": [
      {
        "domain": <string>,
        "reason": <string>,
        "message": <string>,
        "locationType": <string>,
        "location": <string>
      }
    ],
    "code": <integer>,
    "message": <string>
  }
}
Hence, you could define the following function:
def is_error_response(response):
    error = response.get('error')
    if error is None:
        return False
    print("API Error: "
          f"code={error['code']} "
          f"domain={error['errors'][0]['domain']} "
          f"reason={error['errors'][0]['reason']} "
          f"message={error['errors'][0]['message']!r}")
    return True
which you'll invoke after each statement of the form results = comments_data.json(). For the first occurrence of that statement, you'll have:
results = comments_data.json()
if is_error_response(results):
    return []
nextPageToken = results.get("nextPageToken")
For the second instance of that statement:
results = comments_data.json()
if is_error_response(results):
    return comments_list
nextPageToken = results.get("nextPageToken")
Notice that the function is_error_response above prints out an error message on stdout in case its argument is an API error response; this is so that the user of your script is informed about the API call failure.
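Putting the pieces together, here is a minimal sketch (assuming the same names as in the question's get_video_comments, and leaving out the request-level retry logic) of a pagination loop that stops cleanly on the first API error and returns whatever has been collected so far:
comments_list = []
while True:
    comments_data = requests.get(self.YOUTUBE_COMMENTS_URL, params=params, headers=headers)
    results = comments_data.json()
    if is_error_response(results):
        break  # keep the comments fetched before the error instead of crashing
    comments_list += self.format_comments(results, likes_required)
    nextPageToken = results.get("nextPageToken")
    if not nextPageToken:
        break
    params['pageToken'] = nextPageToken
return comments_list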

Updating item in dynamo with multiple expression, can't get delete working with SET

I'm trying to use multiple update expressions, SET and DELETE. Everything works fine with SET, but when I add DELETE, I'm not able to figure out the right syntax.
status = "Previously Deployed version"
message = "New version deployment started"
NewVersion = "PipelineTestAPI_1.5.0"
json_ = {":val1": status,
         ":val2": message,
         ":val4": NewVersion
         }
dynamo_json = ast.literal_eval(d_json.dumps(json_))
json_key = {"Environment": "PipelineTestAPI-Prod"}
dynamo_key = ast.literal_eval(d_json.dumps(json_key))

resp = dynamo.update_item(
    TableName="CICDDeployment_Tracker",
    Key=dynamo_key,
    UpdateExpression="SET currentstage = :val1, message = :val2 DELETE :val4",
    ExpressionAttributeValues=dynamo_json
)
print resp
Error:
Invalid UpdateExpression: Syntax error; token: ":val4", near: "DELETE
:val4"
It should be REMOVE instead of DELETE, and REMOVE takes an attribute name, not a value:
UpdateExpression = "SET currentstage = :val1, message = :val2 REMOVE newVersion",
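For illustration, a minimal sketch of the corrected call (assuming the boto3 low-level client and the helper variables from the question; "newVersion" is assumed to be the attribute being removed). Note that every placeholder left in ExpressionAttributeValues has to be referenced by the expression, so ":val4" is dropped from the value map:
# build the value map without :val4, since REMOVE takes an attribute name, not a value
json_ = {":val1": status,
         ":val2": message}
dynamo_json = ast.literal_eval(d_json.dumps(json_))  # same conversion helper as in the question

resp = dynamo.update_item(
    TableName="CICDDeployment_Tracker",
    Key=dynamo_key,
    UpdateExpression="SET currentstage = :val1, message = :val2 REMOVE newVersion",
    ExpressionAttributeValues=dynamo_json
)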

Classic ASP Base64, image/png -> save as image

This particular question has been asked and answered, but no matter what I try I cannot get this to work. At this point I'm somewhat ready to toss my computer out the window.
No matter what combinations I try, it still fails at:
oStream.write imagebinarydata
Here is the code with comments:
sFileName = Server.MapPath("grafer/test.png")
ByteArray = Request.Form("imageData")
ByteArray = [DATA-URI String] 'This string shows the image perfectly fine, in an image tag in the top of the page so it should be perfectly ok?
response.write ("Decoded: " & Base64Decode(ByteArray)) '<- Writes 'PNG' ?
Const adTypeBinary = 1
Const adSaveCreateOverWrite = 2
Set oStream = Server.CreateObject("ADODB.Stream")
oStream.type = adTypeBinary
oStream.open
imagebinarydata = Base64Decode(ByteArray)
oStream.write imagebinarydata '<- FAILS
'Error:
'ADODB.Stream error '800a0bb9'
'Arguments are of the wrong type, are out of acceptable range, or are in conflict with one another.
'Use this form to overwrite a file if it already exists
oStream.savetofile sFileName, adSaveCreateOverWrite
oStream.close
set oStream = nothing
response.write("success")
Function Base64Decode(ByVal vCode)
    Dim oXML, oNode
    Set oXML = CreateObject("Msxml2.DOMDocument.3.0")
    Set oNode = oXML.CreateElement("base64")
    oNode.dataType = "bin.base64"
    oNode.text = vCode
    Base64Decode = Stream_BinaryToString(oNode.nodeTypedValue)
    Set oNode = Nothing
    Set oXML = Nothing
End Function
Function Stream_BinaryToString(Binary)
    Const adTypeText = 2
    Const adTypeBinary = 1
    'Create Stream object
    Dim BinaryStream 'As New Stream
    Set BinaryStream = CreateObject("ADODB.Stream")
    'Specify stream type - we want To save text/string data.
    BinaryStream.Type = adTypeBinary
    'Open the stream And write text/string data To the object
    BinaryStream.Open
    BinaryStream.Write Binary
    'Change stream type To binary
    BinaryStream.Position = 0
    BinaryStream.Type = adTypeText
    'Specify charset For the source text (unicode) data.
    If Len(CharSet) > 0 Then
        BinaryStream.CharSet = CharSet
    Else
        BinaryStream.CharSet = "us-ascii"
    End If
    'Open the stream And get binary data from the object
    Stream_BinaryToString = BinaryStream.ReadText
End Function
If you are trying to save it, you can use this function:
function SaveToBase64 (base64String)
    ImageFileName = "test.jpg"
    Set Doc = Server.CreateObject("MSXML2.DomDocument")
    Set nodeB64 = Doc.CreateElement("b64")
    nodeB64.DataType = "bin.base64"
    nodeB64.Text = Mid(base64String, InStr(base64String, ",") + 1)
    dim bStream
    set bStream = server.CreateObject("ADODB.stream")
    bStream.type = 1
    bStream.Open()
    bStream.Write( nodeB64.NodeTypedValue )
    bStream.SaveToFile(Server.Mappath("Images/" & ImageFileName), 2 )
    bStream.close()
    set bStream = nothing
end function
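For reference, a hypothetical call site using the form field from the question (the Mid/InStr line above strips the "data:image/png;base64," prefix from the data URI before decoding):
' decode and save the posted data URI in one step
SaveToBase64 Request.Form("imageData")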

Request.BinaryRead(Request.TotalBytes) throws error for large files

I have code that accepts binary data via POST and reads in an array of bytes. For files larger than 200 Kb, the operation fails. I've checked with my sysadmin (we're running IIS 7) to see if there was a limit in our configuration and he says there is none, and suspects it is a problem with the code. Does anybody here see any potential problems? Here is my code:
Public Sub Initialize
    If Request.TotalBytes > 0 Then
        Dim binData
        binData = Request.BinaryRead(Request.TotalBytes) ' This line fails'
        getData binData
    End If
End Sub

Private Sub getData(rawData)
    Dim separator
    separator = MidB(rawData, 1, InstrB(1, rawData, ChrB(13)) - 1)
    Dim lenSeparator
    lenSeparator = LenB(separator)
    Dim currentPos
    currentPos = 1
    Dim inStrByte
    inStrByte = 1
    Dim value, mValue
    Dim tempValue
    tempValue = ""
    While inStrByte > 0
        inStrByte = InStrB(currentPos, rawData, separator)
        mValue = inStrByte - currentPos
        If mValue > 1 Then
            value = MidB(rawData, currentPos, mValue)
            Dim begPos, endPos, midValue, nValue
            Dim intDict
            Set intDict = Server.CreateObject("Scripting.Dictionary")
            begPos = 1 + InStrB(1, value, ChrB(34))
            endPos = InStrB(begPos + 1, value, ChrB(34))
            nValue = endPos
            Dim nameN
            nameN = MidB(value, begPos, endPos - begPos)
            Dim nameValue, isValid
            isValid = True
            If InStrB(1, value, stringToByte("Content-Type")) > 1 Then
                begPos = 1 + InStrB(endPos + 1, value, ChrB(34))
                endPos = InStrB(begPos + 1, value, ChrB(34))
                If endPos = 0 Then
                    endPos = begPos + 1
                    isValid = False
                End If
                midValue = MidB(value, begPos, endPos - begPos)
                intDict.Add "FileName", trim(byteToString(midValue))
                begPos = 14 + InStrB(endPos + 1, value, stringToByte("Content-Type:"))
                endPos = InStrB(begPos, value, ChrB(13))
                midValue = MidB(value, begPos, endPos - begPos)
                intDict.Add "ContentType", trim(byteToString(midValue))
                begPos = endPos + 4
                endPos = LenB(value)
                nameValue = MidB(value, begPos, ((endPos - begPos) - 1))
            Else
                nameValue = trim(byteToString(MidB(value, nValue + 5)))
            End If
            If isValid = True Then
                intDict.Add "Value", nameValue
                intDict.Add "Name", nameN
                dict.Add byteToString(nameN), intDict
            End If
        End If
        currentPos = lenSeparator + inStrByte
    Wend
End Sub
Here is the error that appears in the logs:
Log Name: Application
Source: Active Server Pages
Date: 11/11/2010 2:15:35 PM
Event ID: 5
Task Category: None
Level: Error
Keywords: Classic
User: N/A
Computer: xxxxx.xxxxx.xxx
Description:
Error: File /path-to-file/loader.asp Line 36 Operation not Allowed. .
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Active Server Pages" />
    <EventID Qualifiers="49152">5</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2010-11-11T19:15:35.000Z" />
    <EventRecordID>19323</EventRecordID>
    <Channel>Application</Channel>
    <Computer>PHSWEB524.partners.org</Computer>
    <Security />
  </System>
  <EventData>
    <Data>File /mghdev/loader.asp Line 36 Operation not Allowed. </Data>
  </EventData>
</Event>
By default the limit for the entity size in a POST request is 200 KB, hence your error.
You can increase that limit: open IIS Manager and navigate the tree to your application, double-click the "ASP" icon in the main panel, expand the "Limits" category, and raise the "Maximum Requesting Entity Body Limit" to a larger value.
If this is for a public website, be careful about the limit you set; the purpose of the limit is to prevent malicious POSTs from overwhelming the site.
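For reference, a sketch of the equivalent configuration fragment: the UI setting above corresponds to the maxRequestEntityAllowed attribute of the ASP <limits> element (value in bytes; roughly 1 MB here is an arbitrary example). Depending on how the server is locked down, this section may have to be set in applicationHost.config rather than the site's web.config:
<!-- raise the ASP POST entity limit (bytes); 1048576 is an example value -->
<system.webServer>
  <asp>
    <limits maxRequestEntityAllowed="1048576" />
  </asp>
</system.webServer>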
If you read the specifications of the BinaryRead method, you will see that the parameter is actually an out parameter as well. The BinaryRead method is trying to change the value of Request.TotalBytes which it can't do. TotalBytes is read-only.
You can easily fix this by assigning TotalBytes to a variable and passing that in instead. This is what the example code shows in the MSDN documentation.
If BinaryRead reads a different amount of data, the variable will reflect the size of that read.
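A minimal sketch of that fix applied to the Initialize routine from the question:
Public Sub Initialize
    If Request.TotalBytes > 0 Then
        Dim byteCount, binData
        byteCount = Request.TotalBytes          ' copy the read-only value into a variable
        binData = Request.BinaryRead(byteCount) ' BinaryRead may update byteCount with the bytes actually read
        getData binData
    End If
End Sub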
Two settings are required in IIS under the "Limit Properties" section:
1) Maximum Requesting Entity Body Limit (note that it is in bytes). You have to set the value according to your maximum file size, e.g. 40 MB (40000000 bytes).
2) Script Time-out. Its default value is "00:01:30", which is 90 seconds. Increase it according to the time required by your code to run. I set it to 5 minutes and it solved the problem.

flex URLLoader get Location header

I'm sending a POST request using URLLoader and URLRequest with XML data. The API then sends a response with a redirect page (Location header), and I want to get this URL. How do I catch this response?
UPD:
Event.COMPLETE in debugger:
event = flash.events.Event (#6e1edf9)
  bubbles = false
  cancelable = false
  currentTarget = flash.net.URLLoader (#418e241)
    [inherited] =
      bytesLoaded = 1
      bytesTotal = 0
      data = " "
      dataFormat = "text"
      stream = flash.net.URLStream (#77c5fb9)
        [inherited] =
          bytesAvailable = 0
          connected = true
          endian = "bigEndian"
          objectEncoding = 3
  eventPhase = 2
  target = flash.net.URLLoader (#418e241)
    [inherited] =
      bytesLoaded = 1
      bytesTotal = 0
      data = " "
      dataFormat = "text"
      stream = flash.net.URLStream (#77c5fb9)
  type = "complete"
Listen for the httpResponseStatus event of the URLLoader. The event contains a property called responseHeaders that can provide you with the Location header. See http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/URLLoader.html#event:httpResponseStatus for details.
Also, to prevent the redirect from being followed, you can set followRedirects on the URLRequest to false. See http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/URLRequest.html#followRedirects for details.
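For illustration, a minimal sketch assuming an AIR runtime (both httpResponseStatus and followRedirects are AIR-only per the linked docs); the URL is a placeholder and xmlPayload stands in for the XML being posted:
var request:URLRequest = new URLRequest("http://example.com/api"); // placeholder URL
request.method = URLRequestMethod.POST;
request.data = xmlPayload;
request.followRedirects = false; // keep the 3xx response observable instead of auto-following it

var loader:URLLoader = new URLLoader();
loader.addEventListener(HTTPStatusEvent.HTTP_RESPONSE_STATUS, onStatus);
loader.load(request);

function onStatus(event:HTTPStatusEvent):void {
    // responseHeaders is an Array of URLRequestHeader objects
    for each (var header:URLRequestHeader in event.responseHeaders) {
        if (header.name == "Location") {
            trace("Redirect URL:", header.value);
        }
    }
}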
