Question 1
I have a server-side script which connects to different data storages (e.g. a website database, an IP-telephony server, a web-analytics API). I store the passwords, tokens and other access credentials for these data storages on the server. The script is run by cron every day, not in response to a user request.
In reality, I have many such scripts and hundreds of credentials. All of these data storages belong to different companies, and the data in them is very important. If the server is hacked, the attacker gets all of these credentials and all of the companies' data.
How can I store all of these credentials securely?
Question 2
The script from the first question saves information into a database. That information should be visible only to the client in the browser; neither the programmers nor the database administrator should be able to see it.
Is there any way to encrypt the data in the cron server script and decrypt it only on the client side, in the browser?
Related
I have made an Android application in Qt. It is an employee registration form: I distribute the app to employees, they enter their details (like name, age, salary, etc.), and that data should be saved in one database where I can access it.
Does anyone have an idea how to do this?
Thanks
As you are looking to store employee details entered on many mobile devices in one place, you will need a database that all instances of your app can write to.
One way would be to have a server running something like MySQL together with a web server. You send the data from the mobile app to the web server using an HTTP POST, and some code on the server extracts the data from the POST and inserts it into the database.
If you need to show the data from the database in the mobile app, you can use an HTTP GET, adding parameters to the end of the query string. Some code on the server interprets the GET string, extracts data from the database and sends it back as part of the result.
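A minimal sketch of that server-side intermediary, assuming an ASP.NET generic handler and a hypothetical Employees(Name, Age, Salary) table (any server stack would work the same way; the connection details are placeholders):

    using System.Data.SqlClient;
    using System.Web;

    public class EmployeesHandler : IHttpHandler
    {
        // In a real application this would come from configuration.
        private const string ConnStr =
            "Server=.;Database=Hr;Integrated Security=true";

        public void ProcessRequest(HttpContext context)
        {
            if (context.Request.HttpMethod == "POST")
            {
                // Extract the form fields from the POST body and insert them.
                // Parameterized queries guard against SQL injection.
                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(
                    "INSERT INTO Employees (Name, Age, Salary) VALUES (@n, @a, @s)",
                    conn))
                {
                    cmd.Parameters.AddWithValue("@n", context.Request.Form["name"]);
                    cmd.Parameters.AddWithValue("@a", int.Parse(context.Request.Form["age"]));
                    cmd.Parameters.AddWithValue("@s", decimal.Parse(context.Request.Form["salary"]));
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
                context.Response.StatusCode = 201; // created
            }
            else // GET: look records up by a query-string parameter.
            {
                using (var conn = new SqlConnection(ConnStr))
                using (var cmd = new SqlCommand(
                    "SELECT Name, Age, Salary FROM Employees WHERE Name = @n", conn))
                {
                    cmd.Parameters.AddWithValue("@n", context.Request.QueryString["name"]);
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            context.Response.Write(
                                reader["Name"] + "," + reader["Age"] + "," + reader["Salary"] + "\n");
                }
            }
        }

        public bool IsReusable { get { return true; } }
    }

The handler is also the natural place to add authentication and input validation before anything touches the database.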
There will be a temptation to connect directly from your mobile app to the database on the server using something like ODBC. Don't do this; it's a bad idea. Always have an intermediary application on the server side to give you some insulation against database attacks. If your application is for more than just internal company use, consider putting the database on a separate server configured to accept connections only from the web server.
As you are sending personal and private information (age, salary, etc.), ensure you encrypt your data properly while it is "in flight".
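In practice that means serving the endpoint over HTTPS (TLS). As a second line of defence, the handler sketched above could also refuse any request that did not arrive encrypted, along these lines:

    // At the top of ProcessRequest in the sketch above. Normally HTTPS is
    // enforced in the web-server configuration; this is only a backstop.
    if (!context.Request.IsSecureConnection)
    {
        context.Response.StatusCode = 403; // reject plain HTTP
        return;
    }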
Hopefully this will give you a few pointers to get you started. The question is really quite broad.
I have a web app which my customers use to allow their employees to exchange documents. So, employee A1 can upload a document which employee A2 can later access given that both are employees of customer A. However, employees of any other customers are not allowed to access files uploaded by A1.
I would like to extend the web app to support secure documents. This means that I would have to encrypt document content before storing it on my server. In order to reduce liability I would like NOT to be able to decrypt the document content. So, ideally the content would be decrypted on the client (browser) side.
I have considered solutions that require an extra decryption/encryption service to be deployed on the customer side, but I don't like the extra management overhead that they require.
Assuming that my customers are large corporations with the typical infrastructure (e.g. LDAP), how would you propose to solve this problem without deploying extra services in the customer's environment?
My organisation (a small non-profit) currently has an internal production .NET system with a SQL Server database. The customers (all local to our area) submit requests manually, which our office staff then input into the system.
We are now gearing up towards online public access, so that customers will be able to see the status of their existing requests online and, in future, also be able to create new requests online. A new ASP.NET application will be developed for this.
We are trying to decide whether to host this application on-site on our own servers (with direct access to the existing database) or to use an external hosting provider.
Hosting externally would mean keeping a copy of the requests database on the hosting provider's server. What would be the recommended way to then keep the request data synced in real time between the hosted database and our existing production database?
Trying to sync back and forth between two in-use databases will be a constant headache. The question I would have to ask you is this: if you have the means to host the application on-site, why wouldn't you go that route?
If you have a good reason not to host on-site but you do have some web infrastructure available to you, you may want to consider creating a web service which provides access to your database via a set of well-defined methods (sketched below). Or, on the flip side, you could make the remotely hosted database behind your website your production database and use a web service to access it from your office system.
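For illustration, such a web service's surface could be a handful of explicit methods, sketched here as a classic .NET ASMX service (the namespace and method names are made up; WCF or any other stack would do equally well):

    using System.Web.Services;

    [WebService(Namespace = "http://example.org/requests/")]
    public class RequestService : WebService
    {
        [WebMethod]
        public string GetRequestStatus(int requestId)
        {
            // Look the request up in the single production database
            // and return its current status (lookup omitted here).
            return "Pending"; // placeholder
        }

        [WebMethod]
        public int CreateRequest(string customerId, string description)
        {
            // Insert a new request and return its generated ID
            // (insert omitted here).
            return 0; // placeholder
        }
    }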
In either case, providing access to a single database will be much easier than trying to keep two different ones constantly and flawlessly in sync.
If a web service is not practical (or you have concerns about availability), you may want to consider a queuing system for synchronization. Any change to either database (local or hosted) is also added to a message queue. Each side monitors the queue for changes that need to be made and then applies them. This accounts for one of the databases being unavailable at any given time.
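A sketch of the consuming side of such a queue, assuming a hypothetical PendingChanges(Id, ChangeSql, Applied) table standing in for the queue (MSMQ, SQL Server Service Broker, or any message bus could play the same role):

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Threading;

    class SyncWorker
    {
        // Placeholder connection string for the database being brought up to date.
        const string ConnStr = "Server=.;Database=Requests;Integrated Security=true";

        static void Main()
        {
            while (true)
            {
                var pending = new List<KeyValuePair<int, string>>();
                using (var conn = new SqlConnection(ConnStr))
                {
                    conn.Open();
                    // Collect unapplied changes, oldest first.
                    using (var cmd = new SqlCommand(
                        "SELECT Id, ChangeSql FROM PendingChanges WHERE Applied = 0 ORDER BY Id",
                        conn))
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            pending.Add(new KeyValuePair<int, string>(
                                reader.GetInt32(0), reader.GetString(1)));

                    foreach (var change in pending)
                    {
                        // Apply the change and mark it done in one transaction,
                        // so a crash between the two steps cannot apply it twice.
                        using (var tx = conn.BeginTransaction())
                        {
                            new SqlCommand(change.Value, conn, tx).ExecuteNonQuery();
                            var mark = new SqlCommand(
                                "UPDATE PendingChanges SET Applied = 1 WHERE Id = @id",
                                conn, tx);
                            mark.Parameters.AddWithValue("@id", change.Key);
                            mark.ExecuteNonQuery();
                            tx.Commit();
                        }
                    }
                }
                Thread.Sleep(5000); // poll; either side being down just delays the queue
            }
        }
    }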
That being said, I agree with @LeviBotelho: syncing two databases is a nightmare and should probably be avoided if you can. If you must, you can also look into SQL Server replication.
Ultimately the data is the same: customer-submitted data. Currently it is entered by the customers through you; ultimately it will be entered by them directly. I see no need to have two different databases with the same data. The replication errors alone, when they pop up (and they will), will be a headache for your team, for nothing.
Can anyone explain the detailed and proper use of ASP.NET sessions?
I have read many web portals and blogs, but I still do not understand how and where to use sessions.
We create many sessions on a page: for login, for transferring values from one page to another. But what is the impact on the server when many users, say more than 10,000, access the website: transfer rate, memory usage, and so on?
This may help many beginners, and also experienced people, to use sessions properly in their projects.
Any help is appreciated.
This is roughly how it works:
When the user visits your webpage, a session ID is set in a cookie in the user's browser. Each time the browser sends a request to the server, the browser will pass the cookie containing the session ID to the server. This allows the server to recognize the user and associate data with the user across multiple page requests (you can use sessions without cookies if you want to).
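In code, using the session is just a dictionary-style read and write. For example, in a C# code-behind (the key name here is made up):

    using System;
    using System.Web.UI;

    public partial class Example : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Write: the value is stored server-side under this user's session ID.
            Session["UserName"] = "alice";

            // Read it back on any later request from the same browser. Session
            // holds object references, so cast (and expect null after a timeout).
            string name = Session["UserName"] as string;
            if (name == null)
            {
                // First visit, or the session has expired.
            }
        }
    }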
The server will, by default, store this data in memory. However, if multiple web servers are running the application and serving the same user, they all need access to the user's session data. In that case you can configure your application to store session data using the "ASP.NET State Server" Windows service, or in a SQL Server database (or you can write your own session state provider and store the data wherever you like). Moreover, storing session data only in memory is a bad choice if you are worried that your machine might crash (and you should be).
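Which store is used is a configuration matter. For example, in Web.config (the connection details are placeholders):

    <configuration>
      <system.web>
        <!-- Out-of-process session state via the ASP.NET State Server service -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=loopback:42424"
                      timeout="20" />
        <!-- Or, for SQL Server storage:
             <sessionState mode="SQLServer"
                           sqlConnectionString="Data Source=myServer;Integrated Security=True" /> -->
      </system.web>
    </configuration>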
As for the "proper and detailed" use of ASP.NET sessions it is hard to say - it depends on what you are trying to achieve.
If you can help it, you should store only small amounts of data in sessions, as the combined sessions of all users visiting your website may take up quite a lot of space. Moreover, if you are using the ASP.NET State Server or the SQL Server session state stores the data you store needs to be serialized and deserialized, which will take a non-trivial amount of time for data of non-trivial size.
If what you are planning to store isn't confidential, an alternative approach might be to store the data in a cookie. That way your server does not have to store the data at all; you are trading memory (or disk space, or whatever storage mechanism you chose) for bandwidth, as the cookie becomes part of the payload of every request.
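A sketch of the cookie variant, again with made-up names:

    using System;
    using System.Web;
    using System.Web.UI;

    public partial class Preferences : Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            // Write: the cookie travels to the browser with the response
            // and comes back with every subsequent request.
            var cookie = new HttpCookie("theme", "dark");
            cookie.Expires = DateTime.Now.AddDays(30);
            Response.Cookies.Add(cookie);

            // Read: available on every request until the cookie expires.
            HttpCookie sent = Request.Cookies["theme"];
            string theme = (sent != null) ? sent.Value : "default";
        }
    }

Remember that cookies can be inspected and modified by the user, which is another reason to keep anything confidential or trust-sensitive out of them.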
I have a SOAP web service and I'm trying to figure how to save/log the last 10 requests for each user. Each user is required to send their user/pass in each request, so it's easy to know who the request originated from. With these last 10 requests saved, my goal is to develop some sort of page that will allow them to log-in with their credentials and view the raw request, the actual SOAP message, http header information, and anything relevant that I can think of.
The point is to allow people to troubleshoot their own connection issues instead of having to contact me each time they can't connect, have trouble formatting their request, and so on.
My first thought was to store all this information in memory in a hashtable or something, but that may have scalability issues when we have hundreds/thousands of users hitting the web service.
We could use our database to store these requests. Instead of hitting the database each time, I may need to create some "buffer" mechanism that will only update the database after the buffer gets to a certain number of requests. Is there an existing library or mechanism that will do this?
We can't store these requests on the file system on the machine hosting the web service. Since these requests can potentially contain sensitive information, it's a business decision that I'll need to work around.
Or maybe there's a better way to achieve what I'm trying to do?
I see two alternatives.
1. Mix your two approaches: build the hashtable in memory and, when it hits a limit (say, 1,000 requests), push its contents to the database. No special library is needed for this; the hashtable is your in-memory buffer (see the sketch after this list).
2. Set up a totally different process that sniffs the requests, and offer clients the packet captures for download (or process and present them yourself). This is arguably more work, but it separates the request saving from your application. You could even move the sniffer to another machine if load demanded it.
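For option 1, a minimal sketch of the buffer (the type and method names are hypothetical):

    using System.Collections.Generic;

    public class RequestLogEntry
    {
        public string UserName;
        public string RawRequest; // the SOAP message, HTTP headers, etc.
        public System.DateTime ReceivedAt;
    }

    public class RequestLogBuffer
    {
        private readonly object _lock = new object();
        private readonly List<RequestLogEntry> _buffer = new List<RequestLogEntry>();
        private const int FlushThreshold = 1000;

        public void Add(RequestLogEntry entry)
        {
            List<RequestLogEntry> toFlush = null;
            lock (_lock)
            {
                _buffer.Add(entry);
                if (_buffer.Count >= FlushThreshold)
                {
                    // Hand the full batch off and start a fresh buffer.
                    toFlush = new List<RequestLogEntry>(_buffer);
                    _buffer.Clear();
                }
            }
            // Hit the database outside the lock so other requests aren't blocked.
            if (toFlush != null)
                SaveBatch(toFlush);
        }

        private void SaveBatch(List<RequestLogEntry> entries)
        {
            // Write the batch to the database here, e.g. with one
            // multi-row INSERT or SqlBulkCopy.
        }
    }

One caveat: whatever is still sitting in the buffer is lost if the process recycles, which may be acceptable for troubleshooting data.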