I have a large Oracle query result and want to upload it using HTTP POST.
But due to memory constraints, I cannot read all rows at once into memory.
So I read a few rows at a time, but I can't find a way to start a chunked upload in R.
If it were in C#, it would go something like this:
var req = (HttpWebRequest)WebRequest.Create("http://myserver/upload");
req.SendChunked = true;
req.Method = "POST";
using (var s = req.GetRequestStream())
{
    while (queryResult.hasRow())
    {
        byte[] buffer = queryResult.readRow();
        s.Write(buffer, 0, buffer.Length);
    }
}
var response = req.GetResponse();
Is there anything equivalent in R?
I'm currently making a Roblox whitelist system and it's almost finished, but I need one more thing. I scripted it and it doesn't work (code below). I couldn't find anything to fix what I have (script and screenshot of the error below). Thanks.
local key = 1
local HttpService = game:GetService("HttpService")
local r = HttpService:RequestAsync({
Url = "https://MyWebsiteUrl.com/check.php?key="..key,
Method = "GET"
})
local i = HttpService:JSONDecode(r.Body)
for n, v in pairs(i) do
print(tostring(n)..", "..tostring(v))
end
I assume the website that you are using to validate the key returns the response raw. If so, then:
local key = 1
local HttpService = game:GetService("HttpService")
local r = HttpService:GetAsync("https://MyWebsiteUrl.com/check.php?key="..key)
local response = HttpService:JSONDecode(r)
print(response)
I think this is because you tried to concatenate a string (the URL) with a number (the key variable). Try making the key a string.
I'm trying to write a lambda that will return a .WAV file in chunks over HTTP. I've got my actual data in a byte slice (outputPayload []byte) and am trying to pass it back. While the request seems to run, the response received is of a different length from what I expect and appears to be corrupted. Here's my code:
// Create the necessary headers
responseHeader := make(map[string]string)
responseHeader["Accept-Ranges"] = "bytes"
responseHeader["Content-Range"] = fmt.Sprintf("%s/%d", rangeRequired, fileSize)
responseHeader["Content-Type"] = fileType // this will be "audio/wav"
responseHeader["Content-Length"] = fmt.Sprintf("%d", returnedByteCount)
responseBody := string(outputPayload)
return events.APIGatewayProxyResponse{
StatusCode: http.StatusPartialContent,
Headers: responseHeader,
Body: responseBody,
}, nil
As a basic check, using more at the command line, the start of the original file looks like this:
RIFF$^?^C^#WAVEfmt ^P^#^#^#^A^#^B^#D<AC>^#^#^P<B1>^B^#^D^#^P^#data^#^?^C^#ESC^#^Y^#
While the downloaded file looks like this:
RIFF$^?^C^#WAVEfmt ^P^#^#^#^A^#^B^#D�^#^#^P�^B^#^D^#^P^#data^#^?^C^#ESC^#^Y^#
I'm guessing I have an encoding issue somewhere. My hunch is the string conversion is the problem, but that's the variable type I need for an APIGatewayProxyResponse "Body" component. How do I change my code output to ensure the payload matches the original file?
How can I convert an image to array of bytes using ImageSharp library?
Can ImageSharp library also suggest/provide RotateMode and FlipMode based on EXIF Orientation?
If you are looking to convert the raw pixels into a byte[], you do the following:
var bytes = image.SavePixelData();
If you are looking to get the encoded stream as a byte[] (which I suspect is what you are after), you do this:
using (var ms = new MemoryStream())
{
image.Save(ms, imageFormat);
return ms.ToArray();
}
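For instance, a minimal usage sketch assuming a PNG target and a recent ImageSharp where Save takes an encoder (PngEncoder lives in SixLabors.ImageSharp.Formats.Png; image is whatever you already have loaded):
using (var ms = new MemoryStream())
{
    // Encode the image into the stream as PNG, then copy the written bytes out.
    image.Save(ms, new PngEncoder());
    byte[] encoded = ms.ToArray();
}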
For those looking at this after 2020:
SixLabors seems to like changing naming and adding abstraction layers, so...
Now, to get the raw byte data, you do the following steps:
1. Get the MemoryGroup of the image using the GetPixelMemoryGroup() method.
2. Convert it into an array (because GetPixelMemoryGroup() returns an interface) and take the first element (if somebody can tell me why they did that, I'd appreciate it).
3. From the System.Memory<TPixel>, get a Span and then do things the old way.
(I prefer the solution from the #Majid comment.)
So the code looks something like this:
var _IMemoryGroup = image.GetPixelMemoryGroup();
var _MemoryGroup = _IMemoryGroup.ToArray()[0];
var PixelData = MemoryMarshal.AsBytes(_MemoryGroup.Span).ToArray();
Of course, you don't have to split this into variables and you can do it in one line of code; I did it just for clarification purposes. This solution is only guaranteed viable as of 06 Sep 2020.
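Putting those steps together, here is a self-contained sketch. It assumes ImageSharp 1.0+, an Rgba32 pixel format, and a hypothetical input.png, and it only handles the common case where all the pixel memory sits in the first memory group:
using System.Linq;
using System.Runtime.InteropServices;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

using (var image = Image.Load<Rgba32>("input.png"))
{
    // IMemoryGroup<Rgba32>: take the first Memory<Rgba32> block.
    var memoryGroup = image.GetPixelMemoryGroup();
    var firstBlock = memoryGroup.ToArray()[0];
    // Reinterpret the pixel span as raw bytes and copy them out.
    byte[] pixelData = MemoryMarshal.AsBytes(firstBlock.Span).ToArray();
}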
In my program, I split a file into multiple files and send them to a WCF REST service, which then joins them back into one file. After concatenation, the file size is larger than the size of the file that was sent.
Following is the code to concatenate:
string[] files = Directory.GetFiles(path, string.Concat(guid, "*"),SearchOption.TopDirectoryOnly);
StreamReader fileReader;
StreamWriter fileWriter = new StreamWriter(path + newGuid);
for (Int64 count = 0; count < files.Length; count++)
{
fileReader = new StreamReader(string.Concat(path,guid, count));
fileWriter.Write(fileReader.ReadToEnd());
}
fileWriter.Close();
Are you dealing only with text files? Both StreamWriter and StreamReader are meant to be used only for text files, not binary files.
Further, this line fileWriter.Write(fileReader.); appears to be wrong. It should be something like
fileWriter.Write(fileReader.ReadToEnd());
Of course, if your file size is too large, you should be doing the reading/writing in chunks or on a line-by-line basis, as in the sketch below.
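A minimal sketch of a binary-safe version, reusing the question's path/guid/newGuid variables; FileStream avoids the text decoding that StreamReader/StreamWriter perform, and CopyTo streams the data in internal chunks rather than loading whole files:
string[] files = Directory.GetFiles(path, string.Concat(guid, "*"), SearchOption.TopDirectoryOnly);
using (var output = File.Create(path + newGuid))
{
    for (int count = 0; count < files.Length; count++)
    {
        using (var input = File.OpenRead(string.Concat(path, guid, count)))
        {
            input.CopyTo(output); // copies via an internal buffer, never the whole file at once
        }
    }
}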
My array is 140 bytes; outArray is 512 bytes... not what I wanted. Also, I don't know if I am encrypting properly. Is the code below correct? How do I fix this so that outArray is the real size and not padded with many trailing zeros?
var compress = new SevenZipCompressor();
compress.CompressionLevel = CompressionLevel.Ultra;
compress.CompressionMethod = CompressionMethod.Lzma;
compress.ZipEncryptionMethod = ZipEncryptionMethod.Aes256;
var sIn = new MemoryStream(inArray);
var sOut = new MemoryStream();
compress.CompressStream(sIn, sOut, "a");
byte[] outArray = sOut.GetBuffer();
You are getting the whole MemoryStream buffer; you need to use ToArray():
byte[] outArray = sOut.ToArray();
This will remove the trailing zeros, but you may still get an array bigger than the input. There is overhead with compression/encryption, which is probably bigger than 140 bytes.
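A small illustration of the difference (the exact growth numbers are an implementation detail of MemoryStream, but the capacity-versus-length distinction is the point):
var ms = new MemoryStream();
ms.Write(new byte[10], 0, 10);    // capacity grows to 256
ms.Write(new byte[300], 0, 300);  // capacity doubles to 512
Console.WriteLine(ms.GetBuffer().Length); // 512: the whole internal buffer, zero-padded
Console.WriteLine(ms.ToArray().Length);   // 310: only the bytes actually written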
Many compression algorithms (I'm unfamiliar with the specific details for 7-zip) have a minimum output size. 7-zip performs best on large input data sets, and 140 bytes is not "large". You might do better with something like gzip or lzo, as sketched below. What other compression algorithms have you tried?
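For reference, a hedged sketch of the gzip route using the framework's built-in GZipStream (note gzip adds roughly 18 bytes of fixed header/footer overhead, so very small inputs can still come out larger):
using System.IO;
using System.IO.Compression;

static byte[] GzipCompress(byte[] input)
{
    using (var output = new MemoryStream())
    {
        // GZipStream must be closed/disposed to flush its final block.
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(input, 0, input.Length);
        }
        return output.ToArray(); // ToArray, not GetBuffer, for the reason above
    }
}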