What's the correct procedure for sanitizing a Solid State Drive? - solid-state-drive

SSDs being an entirely different kettle of fish from regular mechanical hard drives, what is the correct procedure for sanitizing one, assuming that I want to re-use it? Waste not, want not.
It's a Corsair SSD.

If this is marked as the answer, I would caution anyone to first understand TRIM and how it may or may not apply to your situation with regard to the hardware and the software within your operating system, and to understand the capabilities of a given SSD (make/model/firmware) with regard to TRIM.
No action is required only if TRIM is enabled and working as expected on a single SSD connected directly via SATA with AHCI as a single drive. This covers wiping free space: it eventually happens in the background on its own.
If you look up how TRIM works, it tells the drive's controller which blocks have been marked as deleted so they can be erased (reads of trimmed blocks typically return zeros), which is why so many articles say it is practically impossible to recover data from an SSD once TRIM has been applied.
But beware that this may not apply to SSDs used in storage appliances such as a NAS, or in RAID, where TRIM might be disabled or managed by the hardware. Again, search for data recovery and TRIM, read articles from reputable sources, and make sure you recognize how you are using your SSD. Obviously, if you are really concerned about this, you should also verify that the data can't be recovered using a data recovery tool.
Also beware that when you delete a file and empty the trash bin, TRIM may not happen immediately after the delete. If it is enabled, we only know that TRIM will eventually happen, which is why, if you are really concerned about wiping free space, you should understand how to force TRIM to run manually, as sketched below.
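On Linux, the fstrim utility asks the kernel to TRIM all unused blocks of a mounted filesystem. A minimal sketch of invoking it from Python (assuming Linux, root privileges, and a drive/controller chain that actually honours TRIM; the mount point is just an example):

```python
import subprocess

def force_trim(mount_point: str = "/") -> None:
    """Ask the kernel to TRIM all unused blocks of the given mounted filesystem.

    Requires root, and a filesystem/controller chain that actually passes
    TRIM through to the SSD (e.g. not behind certain RAID or USB bridges).
    """
    # -v makes fstrim report how many bytes were trimmed
    result = subprocess.run(
        ["fstrim", "-v", mount_point],
        check=True, capture_output=True, text=True,
    )
    print(result.stdout.strip())

if __name__ == "__main__":
    force_trim("/")  # example mount point; adjust for your system
```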
The article "TRIMcheck: Does Your SSD Really have TRIM Working?" (Feb. 24, 2013, http://www.thessdreview.com/daily-news/latest-buzz/trimcheck-does-your-ssd-really-have-trim-working/) describes the issue rather well.
If you are interested in sanitizing an SSD in the sense of wiping the entire drive, then the best method in my opinion is performing an ATA Secure Erase on the drive, ideally with a toolkit provided by the manufacturer of the SSD and following their instructions, so that the procedure is supported by the SSD's firmware and actually works.
For the Corsair SSD in question, there is the Corsair SSD Toolbox, which can be downloaded from the Corsair website.
For any other SSD make, such as Samsung, search for "secure erase Samsung".
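If no vendor toolkit is available, the hdparm utility on Linux can issue the ATA Secure Erase commands directly. A minimal, destructive sketch (the device name is a placeholder; the drive must not be in the "frozen" state, and everything on it will be lost):

```python
import subprocess

DEVICE = "/dev/sdX"    # placeholder -- triple-check the device name first!
PASSWORD = "p"         # temporary password; the erase itself clears it

def ata_secure_erase(device: str, password: str) -> None:
    """Issue an ATA Secure Erase via hdparm (Linux, root required).

    The drive must not be 'frozen' (check `hdparm -I <device>`); a
    suspend/resume cycle usually unfreezes it. This wipes ALL data.
    """
    # Step 1: set a temporary user password to enable the security feature set
    subprocess.run(
        ["hdparm", "--user-master", "u", "--security-set-pass", password, device],
        check=True,
    )
    # Step 2: trigger the firmware-level erase of all user-addressable blocks
    subprocess.run(
        ["hdparm", "--user-master", "u", "--security-erase", password, device],
        check=True,
    )

if __name__ == "__main__":
    ata_secure_erase(DEVICE, PASSWORD)
```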

Related

Downloading data directly to volatile memory

When you download a file from the internet, whether via an FTP request, a peer-to-peer connection, etc., you are always prompted with a window asking where to store the file on your HDD or SSD, or maybe on a little NAS enclosure in your house. Either way, the information is stored on a physical drive and is not considered volatile: it is stored digitally or magnetically and remains available to you even after the system is restarted.
Is it possible for software to be programmed to download and store information directly to a designated location in RAM without it ever touching a form of non-volatile memory?
If this is not possible can you please elaborate on why?
Otherwise, if it is possible, could you give me examples of software that implements this, or perhaps a scenario where this would be the only way to achieve a desired outcome?
Thank you for the help. I feel this must be possible; however, I can't think of any time I've encountered it, and Google doesn't seem to understand what I'm asking.
edit: This is being asked from the perspective of a novice programmer; someone who is looking into creating something like this. I seem to have over-inflated my own question. I suppose what I mean to ask is as follows:
How is software such as RAMDisk programmed, how exactly does it work, and are heavily abstracted languages such as C# and Java incapable of implementing such a feature?
This is actually not very hard to do if I understand your request correctly. What you're looking for is tmpfs[1].
Carve out a tmpfs partition (if /tmp isn't tmpfs for you by default) and mount it at a location, say something like /volatile.
Then you can simply configure your browser or whatever application to download all files to that folder/directory from then on. Since tmpfs is essentially RAM mounted as a folder, it is reset after a reboot. A rough setup sketch follows.
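A minimal sketch of creating such a mount from Python on Linux (root required; the mount point and size are just example values):

```python
import os
import subprocess

MOUNT_POINT = "/volatile"  # example location, as in the answer above

def mount_tmpfs(mount_point: str, size: str = "2G") -> None:
    """Mount a RAM-backed tmpfs filesystem at mount_point (Linux, root)."""
    os.makedirs(mount_point, exist_ok=True)
    # tmpfs lives in RAM (and swap, under memory pressure); contents vanish on reboot
    subprocess.run(
        ["mount", "-t", "tmpfs", "-o", f"size={size}", "tmpfs", mount_point],
        check=True,
    )

if __name__ == "__main__":
    mount_tmpfs(MOUNT_POINT)
    # Then point the browser's download directory at /volatile.
```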
Edit: The OP asks how tmpfs and related RAM-based file systems are implemented. This is usually operating-system specific, but the general idea probably remains the same: the driver responsible for the RAM file system maps (e.g. via mmap()) the required amount of memory and then exposes it through the file system APIs typical of your operating system (for example, POSIX-y operations on Linux/Solaris/BSD).
Here's a paper describing the implementation of tmpfs on Solaris[2].
Further note: if, however, you're trying to simply download something, use it, and delete it without it ever hitting disk, in a way that is entirely internal to your application, then you can simply allocate memory dynamically based on the size of whatever you're downloading, write the bytes into the allocated memory, and free it once you're done, as in the sketch below.
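A minimal Python sketch of that idea (the URL is hypothetical; the downloaded bytes live only in this process's memory):

```python
import io
import urllib.request

URL = "https://example.com/some-file.bin"  # hypothetical URL

def download_to_memory(url: str) -> io.BytesIO:
    """Download a resource straight into a RAM-backed buffer; nothing touches disk."""
    with urllib.request.urlopen(url) as response:
        return io.BytesIO(response.read())

if __name__ == "__main__":
    data = download_to_memory(URL)
    print(f"Downloaded {len(data.getbuffer())} bytes into RAM")
    # When `data` goes out of scope it is garbage-collected and the memory freed;
    # this code never writes to non-volatile storage.
```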
This answer assumes you're on a Linux-y operating system. There are likely similar solutions for other operating systems.
References:
[1] https://en.wikipedia.org/wiki/Tmpfs
[2] http://www.solarisinternals.com/si/reading/tmpfs.pdf

What are the options for transferring 60GB+ files over a network?

I'm about to start developing an application to transfer very large files, without any rush but with a need for reliability. I would like people who have coded for such a particular case to give me an insight into what I'm about to get into.
The environment will be an intranet FTP server, so far using active FTP on the normal ports, on Windows systems. I might also need to zip up the files before sending, and I remember working with a library once that would zip in memory and had a limit on the size... ideas on this would also be appreciated.
Let me know if I need to clarify something else. I'm asking about general/higher-level gotchas, if any, not really for detailed help. I've done apps with normal sizes (up to 1 GB) before, but this one seems like I'd need to limit the speed so I don't kill the network, and things like that.
Thanks for any help.
I think you can get some inspiration from torrents.
Torrents generally break the file up into manageable pieces and calculate a hash of each of them. Later they transfer the file piece by piece. Each piece is verified against its hash and accepted only if it matches. This is a very effective mechanism: it lets the transfer happen from multiple sources and also lets it restart any number of times without worrying about corrupted data.
For transfer from a server to a single client, I would suggest that you create a header which includes the metadata about the file, so the receiver always knows what to expect, knows how much has been received, and can also check the received data against the hashes.
I have implemented this idea in practice on a client-server application, though the data size was much smaller, say 1500 KB, and reliability and redundancy were important factors. This way, you can also effectively control the amount of traffic you want to allow through your application.
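A minimal sketch of the piece-and-hash idea in Python (the piece size, file name, and choice of SHA-256 are assumptions for illustration):

```python
import hashlib

PIECE_SIZE = 4 * 1024 * 1024  # 4 MiB pieces; tune to your network and memory

def build_manifest(path: str) -> list[str]:
    """Return the SHA-256 hash of each fixed-size piece of the file, in order."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(PIECE_SIZE):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def verify_piece(piece: bytes, expected_hash: str) -> bool:
    """Check one received piece against the hash listed in the sender's manifest."""
    return hashlib.sha256(piece).hexdigest() == expected_hash

if __name__ == "__main__":
    manifest = build_manifest("big_file.bin")  # hypothetical file name
    print(f"{len(manifest)} pieces of up to {PIECE_SIZE} bytes each")
```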
I think the way to go is to use the rsync utility as an external process to Python -
Quoting from here:
the pieces, using checksums, to possibly existing files in the target
site, and transports only those pieces that are not found from the
target site. In practice this means that if an older or partial
version of a file to be copied already exists in the target site,
rsync transports only the missing parts of the file. In many cases
this makes the data update process much faster as all the files are
not copied each time the source and target site get synchronized.
And you can use the -z switch to have compression applied transparently, on the fly, during the data transfer, with no need to bog down either end compressing the whole file beforehand. A minimal invocation from Python is sketched below.
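A minimal sketch of driving rsync from Python with compression, resumable partial files, and a bandwidth limit (the paths, host, and limit are hypothetical):

```python
import subprocess

def rsync_transfer(source: str, destination: str, bwlimit_kbps: int = 5000) -> None:
    """Copy a large file with rsync: compressed, resumable, bandwidth-limited."""
    subprocess.run(
        [
            "rsync",
            "-z",                         # compress data on the wire
            "--partial",                  # keep partially transferred files for resuming
            "--progress",                 # show transfer progress
            f"--bwlimit={bwlimit_kbps}",  # throttle so the intranet isn't saturated
            source,
            destination,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical source path and destination host for illustration
    rsync_transfer("/data/big_file.bin", "user@backup-server:/data/")
```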
Also, check the answers here:
https://serverfault.com/questions/154254/for-large-files-compress-first-then-transfer-or-rsync-z-which-would-be-fastest
And from rsync's man page, this might be of interest:
--partial
By default, rsync will delete any partially transferred
file if the transfer is interrupted. In some circumstances
it is more desirable to keep partially transferred files.
Using the --partial option tells rsync to keep the partial
file which should make a subsequent transfer of the rest of
the file much faster

ZODB in-memory backend?

I'm currently working on a fairly large project (active members number in the hundreds of thousands) and am strongly leaning towards a Plone solution.
I've asked some questions related to it like here and here.
I got some replies from very experienced Plonistas (and active Stack Overflowers as well), and I really appreciate it. People keep saying Plone does not scale well to something that large, and most of the reasons given come down to the ZODB.
Then I thought of an in-memory backend for ZODB. RAM is really cheap now! You can get 128 GB for just ~$3k, ten times the price of a normal $300 128 GB SSD, and achieve ~30 GB/s of I/O bandwidth compared to the SSD's ~300 MB/s.
An in-memory backend + blobs for binaries + 10-second disk journalling for backup + all undos except the last 10 seconds would be an instant kill! It should smoke the RDBMSs and offer full ACID + transactions + object mapping, compared to the likes of couch*/redis, etc.
Is it technically possible? Is there any implementation? Is it worth implementing (in your opinion)?
There is a memcache option for RelStorage which helps when you need to use a slow database, but really you should probably just leave that sort of caching to your operating system and make sure your database server has plenty of RAM. (If your RAM is large enough then your filesystem cache should already store most of the data.)
An SSD will significantly reduce the worst-case read latencies for random access to data not already in the filesystem cache. It seems silly not to use them now, especially as the Intel 330 SSD is so cheap and has a capacitor equivalent to a battery-backed RAID controller (making writes super fast too).
An all-in-RAM solution can never be considered ACID, as it won't be Durable.
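For what it's worth, ZODB already ships a purely in-memory storage, MappingStorage, which illustrates exactly that trade-off: it behaves like a normal storage but loses everything when the process exits. A minimal sketch (assuming the ZODB and transaction packages are installed):

```python
import transaction
from ZODB import DB
from ZODB.MappingStorage import MappingStorage

# RAM-only storage: fast and transactional, but nothing survives a restart.
storage = MappingStorage()
db = DB(storage)
connection = db.open()
root = connection.root()

root["members"] = {"alice": 1, "bob": 2}  # hypothetical data
transaction.commit()                      # A, C and I -- but not D

connection.close()
db.close()
```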
As mentioned in my comment on your other post, it is not the ZODB that is the problem here but Plone's synchronous use of a single contended portal_catalog.
Instead of keeping the entire ZODB in memory, you could mount the portal_catalog in a separate mount point and keep that in memory. I've already seen this kind of configuration, and it works smoothly for about 8k users on standard hardware (2 servers + 1 ZEO server). It may be sufficient for your needs, perhaps with more performant hardware.

How can I use hardware solutions to create "unbreakable" encryption or copy protection?

Two types of problems I want to talk about:
Say you wrote a program you want to encrypt for copyright purposes (e.g. preventing an unlicensed user from reading a certain file, or disabling certain features of the program), but most software-based encryption can be broken by hackers (just look at the number of programs available to hack programs into "full versions").
Say you want to distribute software to other users but want to protect against piracy (i.e. another user making a copy of the software and selling it as their own). What effective way is there to guard against this (similar to the copy protection on music CDs, like DRM), both from a software perspective and a hardware perspective?
Or do those two belong to the same class of problems (dongles being the hardware/chip-based solution, as many noted below)?
So, can chip- or hardware-based encryption be used? And if so, what exactly is needed? Do you purchase a special kind of CPU or a special kind of hardware? What do we need to do?
Any guidance is appreciated, thanks!
Unless you're selling this program for thousands of dollars a copy, it's almost certainly not worth the effort.
As others have pointed out, you're basically talking about a dongle, which, in addition to being a major source of hard-to-fix bugs for developers, is also a major source of irritation for users, and there's a long history of these supposedly "uncrackable" dongles being cracked. AutoCAD and Cubase are two examples that come to mind.
The bottom line is that a determined enough cracker can still crack dongle protection; and if your software isn't an attractive enough target for the crackers to do this, then it's probably not worth the expense in the first place.
Just my two cents.
Hardware dongles, as other people have suggested, are a common approach for this. This still doesn't solve your problem, though, as a clever programmer can modify your code to skip the dongle check - they just have to find the place in your code where you branch based on whether the check passed or not, and modify that test to always pass.
You can make things more difficult by obfuscating your code, but you're still back in the realm of software, and that same clever programmer can figure out the obfuscation and still achieve his desired goal.
Taking it a step further, you could encrypt parts of your code with a key that's stored in the dongle, and require the bootstrap code to fetch it from the dongle. Now your attacker's job is a little more complicated - they have to intercept the key and modify your code to think it got it from the dongle, when really it's hard-coded. Or you can make the dongle itself do the decryption, passing in the code and getting back the decrypted code - so now your attacker has to emulate that, too, or just take the decrypted code and store it somewhere permanently.
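A minimal sketch of that bootstrap pattern in Python (read_key_from_dongle() is a hypothetical stand-in for a vendor SDK call, and Fernet from the cryptography package stands in for whatever cipher such a scheme would actually use):

```python
import marshal
import types
from cryptography.fernet import Fernet  # pip install cryptography

def read_key_from_dongle() -> bytes:
    """Hypothetical stand-in for the vendor SDK call that returns the dongle's key."""
    raise NotImplementedError("replace with the dongle vendor's SDK call")

def load_protected_module(encrypted_blob: bytes) -> types.ModuleType:
    """Decrypt a marshalled code object with the dongle-held key and execute it."""
    key = read_key_from_dongle()             # an attacker can intercept the key right here
    code_bytes = Fernet(key).decrypt(encrypted_blob)
    code_obj = marshal.loads(code_bytes)     # marshalled bytes -> code object
    module = types.ModuleType("protected")
    exec(code_obj, module.__dict__)          # run it in a fresh module namespace
    return module
```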
As you can see, just like software protection methods, you can make this arbitrarily complicated, putting more burden on the attacker, but history shows that the tables are tilted in favor of the attacker. While cracking your scheme may be difficult, it only has to be done once, after which the attacker can distribute modified copies to everyone. Users of pirated copies can now easily use your software, while your legitimate customers are saddled with an onerous copy protection mechanism. Providing a better experience for pirates than legitimate customers is a very good way to turn your legitimate customers into pirates, if that's what you're aiming for.
The only - largely hypothetical - way around this is called Trusted Computing, and relies on adding hardware to a user's computer that restricts what they can do with it to approved actions. You can see details of hardware support for it here.
I would strongly counsel you against this route for the reasons I detailed above: You end up providing a worse experience for your legitimate customers than for those using a pirated copy, which actively encourages people not to buy your software. Piracy is a fact of life, and there are users who simply will not buy your software even if you could provide watertight protection, but will happily use an illegitimate copy. The best thing you can do is offer the best experience and customer service to your legitimate customers, making the legitimate copy a more attractive proposition than the pirated one.
They are called dongles; they fit in a USB port (nowadays) and contain their own little computer and some encrypted memory.
You can use them to check that the program is valid by testing whether the hardware dongle is present; you can store encryption keys and other info in the dongle; or sometimes you can have some program functions run inside the dongle. The approach relies on the dongle being harder to copy and reverse engineer than your software.
See DESkey or HASP (they seem to have been taken over).
Back in the day I saw hardware dongles on the parallel port. Today you use USB dongles like this. Wikipedia link.

Why isn't bittorrent more widespread? [closed]

I suppose this question is a variation on a theme, but different.
Torrents will never replace HTTP, or even FTP download options. This said, why aren't there torrent links next to those options on more websites?
I'm imagining a web system whereby files available via HTTP, say from http://example.com/downloads/files/myFile.tar.bz2, can have torrents cheaply autogenerated and stored in /downloads/torrents/myFile.tar.bz2.torrent, with the tracker at /downloads/tracker/.
Trackers are a well-defined problem and not incredibly difficult to implement, and there are many drop-in alternatives out there already. I imagine it wouldn't be difficult to customise one to do what is needed here.
The autogenerated torrent file can include the normal HTTP server as a permanent seed; the extensions to do this (HTTP/web seeding) are very well supported by most, if not all, of the major torrent clients, and they require no reconfiguration or special setup on the server end (they use stock-standard HTTP Range headers). A rough sketch of the autogeneration step is below.
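Something like this is what I have in mind for the autogeneration step (a hand-rolled sketch: the bencoder is minimal, the piece size is arbitrary, and the paths/URLs are the hypothetical ones from above):

```python
import hashlib
from pathlib import Path

PIECE_LENGTH = 256 * 1024  # 256 KiB pieces, a common default

def bencode(value) -> bytes:
    """Minimal bencoder for ints, bytes, str, lists and dicts."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        # bencoded dicts must have their keys sorted as raw byte strings
        items = sorted((k.encode() if isinstance(k, str) else k, v) for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(value))

def make_torrent(file_path: str, tracker_url: str, http_url: str) -> bytes:
    """Build a single-file .torrent that lists the plain HTTP copy as a web seed."""
    data = Path(file_path).read_bytes()  # fine for a sketch; stream for huge files
    pieces = b"".join(
        hashlib.sha1(data[i:i + PIECE_LENGTH]).digest()
        for i in range(0, len(data), PIECE_LENGTH)
    )
    meta = {
        "announce": tracker_url,
        "url-list": [http_url],  # web seed: the ordinary HTTP download URL
        "info": {
            "name": Path(file_path).name,
            "length": len(data),
            "piece length": PIECE_LENGTH,
            "pieces": pieces,
        },
    }
    return bencode(meta)

if __name__ == "__main__":
    # Hypothetical layout matching the directories described above
    torrent = make_torrent(
        "downloads/files/myFile.tar.bz2",
        "http://example.com/downloads/tracker/announce",
        "http://example.com/downloads/files/myFile.tar.bz2",
    )
    Path("downloads/torrents/myFile.tar.bz2.torrent").write_bytes(torrent)
```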
Personally, if I set up such a system, I would then speed-limit the /downloads/files/ directory to something reasonable, say maybe 40-50 KB/s, depending on what exactly you were trying to serve.
Does such a file delivery system exist? Would you use it if it did: for your personal, company, or other website?
First of all: http://torrent.ubuntu.com/ has torrents for Ubuntu.
Second: Opera has a built-in torrent client.
Third: I agree there is a stigma attached to P2P, so much so that we have sites that need to be called "legaltorrents" and the like, because by default a torrent is assumed to be an illegal thing, and let us not kid ourselves, it is.
Getting torrents into the mainstream is an excellent idea. You can't tamper with the files you are seeding, so there is no risk there.
The big reason is not really stigma, though. The big reason is analytics, and protecting them. With torrents, these people (companies like Microsoft and the like) would not be able to gather important information about who is doing the downloads (not personally identifiable information, and quickly aggregated away). With torrents, other people would be able to see this information, at least partially. A company would love to seed the torrent of an evaluation version of a competing company's product, just to get an idea of how popular it is and where it is being downloaded from. It is not as good as hosting the download on your own web servers, but it is the next best thing.
This is possibly the reason why the Vista download on Microsoft's sites, and its many service packs and SDKs, are not offered as torrents.
Another thing is that people just won't participate, and it is not difficult to figure out why, given the number of hoops you have to jump through: you have to figure out the firewall, then the NAT thing, then the UPnP thing, and then maybe your ISP is throttling your bandwidth, and so on.
Again, I would (and do) seed to a ratio of 1.5 or beyond for the torrents I download, but that is because they are Linux, OpenOffice, that sort of thing. I would probably feel funny seeding Adobe Acrobat, or some evaluation version or other, because those guys are making profits and I am no fool to save money for them. Let them pay for HTTP downloads.
edit: (based on the comment by monoxide)
For the freeware out there and for SF.net downloads, the problem is that they cannot rely on seeders and will need their fallback mirrors anyway, so for them torrents only add to the expense. One more reason that comes to mind is that even in software shops, internet access is now thoroughly controlled, and the ports that torrents rely on, plus the upload requirement, are an absolute no-no. Since most people who need these sites and their downloads are in these kinds of offices, they will continue to use HTTP.
BUT even that is not the whole answer. These people have restrictions on redistribution in their licensing terms, so their problem is this: if you are seeding their software, you are redistributing it. That is a violation of their licensing terms, so if they host a torrent download and allow you to seed it, that is entrapment and they can be sued (I am not a lawyer; I learn from watching TV). They would then have to delicately change their licensing to allow distribution by seeding torrents but not otherwise. This is an easy enough concept for most of us, but the vagaries of the English language and the hard, blank look on the judge's face make it a very tricky thing to do. The judge may personally understand torrents, but sitting up there in the court he has to frown and pretend not to, because it is not documented in legalese.
That is the ditch they have dug, and there they fall into it. Let us laugh at them and their misery. Yesterday's smart is today's stupid.
Cheers!
I'm wondering if part of it is the stigma associated with torrents. The only software that I see providing torrent links is Linux distros, and not all of them (for example, the Ubuntu website does not provide torrents for downloading Ubuntu). However, if I said I was going to torrent something, most people would associate it with illegal downloads (music, video, TV shows, etc.).
I think this might come from the top. An engineer might propose using a torrent system to provide downloads, yet management shudders when they hear the word "torrent".
That said, I would indeed use such a system, although I doubt I would be able to seed at home (I found that the bandwidth kills the connection for everyone else in the house). At school, however, I would probably not only use such a system but seed for it as well.
Another problem, as mentioned in the other question, is that torrent software is not built into browsers. Until it is, you won't see widespread use of it.
Kontiki (which is very similar to BitTorrent) makes up about 10% of all internet traffic by volume in the UK, and is used exclusively for legal distribution of "big media" content.
There are people who won't install a torrent client because they don't want the RIAA sending them extortion letters and running up legal fees in court after it breaks into their computers and sees MP3 files that are completely legal backup copies of legally purchased CDs.
There's a lot of fear about torrents out there and I'm not comfortable with any of the clients that would allow even limited access to my PC because that's the "camel's nose in the tent".
The other posters are correct. There is a huge stigma against torrent files in general due to their use by hackers and people who violate copyright law. Look at The Pirate Bay: torrent files are all they "serve". A lot of cable companies in the US have also started traffic-shaping torrent traffic on their networks because it is such a bandwidth hog.
Remember that torrents are not a download accelerator. They are meant to offload someone who cannot afford (or maybe just doesn't want) to pay for all the bandwidth themselves. The users who are seeding take the majority of the load. No one seeding? You get no files.
The torrent protocol is also horribly chatty. As much as 40% of your communications on the wire can be control-flow messages and chatter between clients asking for pieces. This is why cable companies hate it so much. There are also some problems with the torrent endgame (where the client asks a lot of peers for the final pieces in an attempt to complete the torrent, but can sometimes end up with no available pieces, leaving you stuck at 99% while seeding for everyone else).
HTTP is also pretty well established and can be traffic-shaped, put behind load balancers, etc. So most legit companies that serve up their content can afford to host it, or use someone like Akamai to replicate the data and then load-balance it.
Perhaps it's the ubiquity of HTTP-enabled browsers; you don't see as many FTP download links anymore, so that could be the biggest factor (ease of use for the end user).
Still, I think torrent downloads are a valid alternative, even if they won't be the primary download.
I even suggested that SourceForge auto-generate torrents for downloads, and they agreed it was a good idea... but haven't implemented it (yet). Here's hoping they will.
Something like this actually exists at speeddemosarchive.com.
The server hosts a Metroid Prime speedrun and provides a permanent seed for it.
I think that it's a very clever idea.
Contrary to your idea, you don't need an HTTP URL.
I think one of the reasons is that (currently) torrent links are not fully supported inside web browsers... you have to fire up the torrent client and so on.
Maybe it's time for a little Firefox extension/plugin? Damn, now I am at work! :)
