We need to check the technical feasibility of calling an ASP.NET Web API from a Korn shell script.
The purpose is to log the start and the successful or error end of the Korn shell script to a log table via the Web API.
What can be done to achieve this requirement?
I have not yet tried anything, as I am completely new to this territory and am not aware of where to start. Please guide me through this.
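For context, calling a Web API from ksh usually comes down to invoking a command-line HTTP client such as curl, provided the host running the script can reach the API over HTTP. A minimal sketch, assuming curl is installed; the endpoint URL, payload fields, and script name are hypothetical:

#!/bin/ksh
# Hypothetical logging endpoint exposed by the ASP.NET Web API
LOG_URL="http://appserver/api/scriptlog"

log_status() {
    # POST a small JSON payload; the field names are illustrative only
    curl -s -X POST -H "Content-Type: application/json" \
         -d "{\"script\":\"nightly_load.ksh\",\"status\":\"$1\"}" "$LOG_URL"
}

log_status "STARTED"

/path/to/real_work.ksh    # placeholder for the actual work of the script
rc=$?

if [ $rc -eq 0 ]; then
    log_status "SUCCESS"
else
    log_status "ERROR"
fi

The Web API side would then just be a controller action that writes the posted values into the log table.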
I am trying to use the SQLMAP tool to test for possible SQL injection on my ASP.NET web application, which uses forms authentication. But I am not getting any clear direction on this. I have searched numerous forums and found nothing concrete for an ASP.NET web application. Most of the demos are for PHP sites, which do not work like ASP.NET.
When I try to run
sqlmap.py -u "https://test.XXXX_SIT/login.aspx" --dbs
command, I end up getting the below response from the SQLMAP console
I am new to this aspect of security testing, and I am open to any better and simpler free tool that does the job for me.
Please let me know a possible solution or other, better ways to achieve this.
Regards,
Krishna Samaga B.
You need parameters to test.
sqlmap.py -u "https://test.XXXX_SIT/login.aspx?login=1&pass=2" --dbs
Here, for example, login and pass are GET parameters.
or
sqlmap.py -u "https://test.XXXX_SIT/login.aspx" --data='login=1&pass=2' --dbs
if you're sending a POST request with login and pass.
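Since the application uses forms authentication, sqlmap usually also needs an authenticated session to get past the login page. One common approach (the page, parameter, and cookie values below are placeholders) is to log in with a browser and pass the session cookies along with the request:

sqlmap.py -u "https://test.XXXX_SIT/SomePage.aspx" --data='searchTerm=1' --cookie='ASP.NET_SessionId=abc123; .ASPXAUTH=xyz789' --dbs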
I am in the process of converting our legacy custom database deployment process, with its custom-built tools, into a full-fledged SSDT project. So far everything has gone very well. I have composite projects that can deploy a base database as well as projects that deploy sample and test data.
The problem I am having now is finding a solution for running some sort of code that can call a web service to get an activation code and add it to the database as the final step of the process. Can anyone point me to a hook that I might be able to use?
UPDATE: To be clearer I am doing this to make it easier to maintain and deploy our sample and test data to a local machine. We can easily use Jenkins to activate the sites when they are deployed nightly to our official testing environments. I'm just hoping to be able to do this in a single step to replace the homegrown database deploy tool that we use now.
In my deployment scenario I wrapped the database deployment process in some PowerShell scripts which handle the necessary prerequisites. For example:
the PowerShell script is started and first stops some services
next it runs sqlpackage.exe or pre-produced SQL deployment scripts
finally the PowerShell script starts the services again.
You can pass parameters from PowerShell to the SQL scripts or to sqlpackage.exe as SQLCMD variables. So you can call the web service first, then pass the activation code in as a SQLCMD variable and use that variable in the post-deployment script.
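A rough PowerShell sketch of that idea (the web service URL, file names, and variable name are placeholders):

# Call the web service to obtain the activation code (URL is hypothetical)
$activationCode = Invoke-RestMethod -Uri "http://licenseserver/api/activation" -Method Get

# Publish the dacpac, handing the code in as a SQLCMD variable
& sqlpackage.exe /Action:Publish `
    /SourceFile:"MyDatabase.dacpac" `
    /TargetServerName:"localhost" /TargetDatabaseName:"MyDatabase" `
    /Variables:ActivationCode=$activationCode

Inside the SSDT post-deployment script the value is then available as $(ActivationCode), so the final step can insert it into whatever table needs it.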
Particularly if it's the final step, I'd be tempted to do this separately, using whatever tool you're using to do the deployment: Powershell scripts, msbuild, TFS, Jenkins, whatever. Presumably there's also a front-end of some description that gets provisioned this way?
SSDT isn't an eierlegende Wollmilchsau (the German "egg-laying wool-milk-sow", i.e. a tool that does everything); it's a set of tools for managing database changes.
I suspect if the final step were "provision a Google App Engine Instance and deploy a Python script", for example, it wouldn't appear to be a natural candidate for inclusion in an SSDT post-deploy script, and I reckon this falls into the same category.
I am running the Quartz.NET server as a Windows service, as described in the documentation. I am trying to understand how I can create new jobs for Quartz to schedule, without needing to rebuild the Quartz.NET server application every time.
I would like to be able to add new jobs from an exe or dll (other options are welcome). This way I can add jobs dynamically. From what I can tell, it seems all jobs must be defined up front and built into the server. From there the user can pass parameters and enable triggers via an XML file. I am using MS SQL Server instead of the XML file as the persistence layer.
My use case is that I need to generate reports at particular times, but users can create new reports after my application has launched. I am using DevExpress for my reporting (not sure if this matters).
Any guidance is very appreciated.
You should check out the work Tolis Bekiaris did on the eXpand Framework's JobScheduler. It's a module for DevExpress's XAF and Quartz.NET which should give you plenty of sample code, especially if you are already using XPO for your data.
You can get the source code here.
Or alternatively, it's on Github.
You'll find the job scheduler code in eXpand/Xpand/Xpand.ExpressApp.Modules/JobScheduler.
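As a side note, Quartz.NET itself also lets you register new triggers and job data at runtime against the ADO.NET job store, so only the job classes (for example a generic "run report" job) need to be compiled into the server. A rough sketch using the Quartz.NET 2.x-style synchronous API; ReportJob and the report name are hypothetical:

using Quartz;
using Quartz.Impl;

// Obtain a scheduler configured against the same MS SQL Server AdoJobStore as the service
IScheduler scheduler = new StdSchedulerFactory().GetScheduler();

// Describe the new report run with job data instead of a new job class
IJobDetail job = JobBuilder.Create<ReportJob>()      // ReportJob: hypothetical, already-deployed job class
    .WithIdentity("monthly-sales", "reports")
    .UsingJobData("reportName", "MonthlySales")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("monthly-sales-trigger", "reports")
    .WithCronSchedule("0 0 6 1 * ?")                  // 06:00 on the first of every month
    .Build();

scheduler.ScheduleJob(job, trigger);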
I'm implementing a very lightweight (embedded) OSGi framework which runs on a target piece of hardware. To attach a console I'm using org.apache.felix.gogo.shell and org.apache.felix.shell.remote.
To date, I've logged all custom messages using System.out.println, which has worked fine, but now that I'm using the remote console I need something that will let me 'print' my messages to the OSGi console (and hopefully have them appear both on the target's console and on the telnet console provided by felix.shell.remote).
I'm guessing there must be a way to get a handle to an OutputStream (or similar) to do this; My question is how? It seems that most people redirect their stdout etc. to solve problems like this.
I'm using declarative services, so I was hoping to be able to set up a component which attaches a referenced service (not important, but it would make things nice and neat).
Any help is greatly appreciated.
The best way is to log custom messages using the OSGi Log Service. That way you can retrieve recent logs from the LogReader service inside your shell or web console. If you insist on using popular frameworks like log4j etc., you can get a bridge with Pax Logging.
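Since you're already using declarative services, a minimal sketch of a component that references the LogService could look like this (the component name and methods are only illustrative; with XML-based DS the wiring is equivalent):

import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Reference;
import org.osgi.service.log.LogService;

@Component
public class StatusReporter {

    private volatile LogService log;

    @Reference
    void setLog(LogService log) {
        this.log = log;
    }

    void reportStartup() {
        // Entries end up in the LogReader service, which the shell or webconsole can read back
        log.log(LogService.LOG_INFO, "Target application started");
    }
}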
Alternatively, redirecting the output to a file in a known location works. You can then make a command in gogo that views that file or provide a tail function that continuously displays the new parts of the file.
I'm stuck on this one. I hope someone here has some experience with this. Here is the situation: I have set up a web page that allows users to upload flat files to be loaded into SQL Server 2005 using SSIS. There are two different SSIS processes depending on the file type. The decision of which SSIS process to use is made by the user on the website.
Once the file is uploaded by the user, the process is started by a .NET Process object. The command line is the normal one you'd expect for starting dtexec with a specific SSIS package while setting a couple of variables. For example:
dtexec /f /De /set value
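Expanded with hypothetical values (the real package path, decryption password, and variable name aren't shown above), the shape of the command is:

dtexec /F "D:\SSIS\LoadFlatFile.dtsx" /De "password" /Set "\Package.Variables[User::UploadedFile].Value";"D:\Uploads\file1.txt"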
The ASP.NET Anonymous User is running as a domain user account. All SSIS package files for both SSIS processes are in the same directory. The domain user account has full privileges on that directory. The same method in ASP.NET starts either of the processes. The only difference is the WebMethod called by the website. One WebMethod for each type. It is in these WebMethods where the unique arguments are assigned to the command line text for SSIS.
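For reference, a rough sketch of how such a launch and capture of standard output might look from the WebMethod (paths and variable names are hypothetical):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    FileName = "dtexec",
    Arguments = "/F \"D:\\SSIS\\LoadFlatFile.dtsx\" /De \"password\" " +
                "/Set \"\\Package.Variables[User::UploadedFile].Value\";\"D:\\Uploads\\file1.txt\"",
    UseShellExecute = false,
    RedirectStandardOutput = true,
    RedirectStandardError = true
};

using (var process = Process.Start(psi))
{
    string output = process.StandardOutput.ReadToEnd();   // the text quoted below came from here
    process.WaitForExit();
    int exitCode = process.ExitCode;                       // non-zero means the package failed
}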
Here is where I have run into the problem. When the website runs process "1" it works fine, but process "2" fails with the error mentioned above. When I capture the standard output I receive this:
Microsoft (R) SQL Server Execute Package Utility
Version 9.00.4035.00 for 32-bit
Copyright (C) Microsoft Corp 1984-2005. All rights reserved.
Started: 10:34:14 AM
Could not create DTS.Application because of error 0x800401F3
Started: 10:34:14 AM
Finished: 10:34:14 AM
Elapsed: 0.016 seconds
I don't understand how everything can be nearly identical yet only one will run. One final thing: both methods work fine when I test directly from Visual Studio. I figure it must be something with the Anonymous User account being used, but I can't figure out why one process would work and the other wouldn't when they are so similar.
Any help will be greatly appreciated.
Rob
Found the problem. The error code was a phantom. What happened was that a connection component was being fed by a variable holding a path to a folder the new account could not access. Even though at run time it would be replaced with a good target, it was failing during validation. This is why there were no logs: I didn't have the logging level high enough to see it, and it acted like a security issue, which it was, in one way of looking at it.