I don't understand how many files I have to create - encryption

Implement the Caesar Cipher algorithm to encrypt and decrypt a file's contents using the C language. The cipher is a basic, widely used algorithm. Your program should have two C files named encrypt.c and decrypt.c that contain encrypt() and decrypt() functions, respectively. In encrypt.c, use the main() function to take input from an "input.txt" file and store the encrypted message in an "enc_msg.txt" file. In decrypt.c, use the main() function to take input from the "enc_msg.txt" file, store the decrypted message in a "dec_msg.txt" file, and print the decrypted message to console output as well. The key is 3.
Thanks

Create two .c files: encrypt.c and decrypt.c.
Create a sample data file, input.txt.
Run your encrypt program to create the output file enc_msg.txt from the input.txt file.
Run your decrypt program to create the output file dec_msg.txt from the input enc_msg.txt file.
So you need to create 3 files: encrypt.c, decrypt.c, and input.txt.
Running your programs will then generate two more files: enc_msg.txt and dec_msg.txt.
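For reference, here is a minimal sketch of what encrypt.c could look like. The assignment doesn't pin down the function signatures, so the character-by-character encrypt() below is just one reasonable shape, assuming the usual letters-only variant of the cipher (non-alphabetic characters pass through unchanged):

#include <stdio.h>
#include <ctype.h>

#define KEY 3

/* shift a letter forward by KEY positions, wrapping around the
   alphabet; everything else passes through unchanged */
char encrypt(char c) {
    if (isupper((unsigned char)c)) return 'A' + (c - 'A' + KEY) % 26;
    if (islower((unsigned char)c)) return 'a' + (c - 'a' + KEY) % 26;
    return c;
}

int main(void) {
    FILE *in  = fopen("input.txt", "r");
    FILE *out = fopen("enc_msg.txt", "w");
    if (!in || !out) { perror("fopen"); return 1; }
    int c;
    while ((c = fgetc(in)) != EOF)
        fputc(encrypt((char)c), out);
    fclose(in);
    fclose(out);
    return 0;
}

decrypt.c is the mirror image: read from enc_msg.txt, shift each letter back with (c - 'a' + 26 - KEY) % 26, write to dec_msg.txt, and fputc each decrypted character to stdout as well.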

Related

AES256 encrypted data unable to be copied and pasted

I am using OpenSSL to encrypt my data. Assume I have 3 rows of data (for simplicity):
0123456789
987654321
121212121
After encrypting, I get
Salted__èøm¬è!^¬ü
?‘¡ñ1•yÈ}, .◊¬ó≤|Úx$mø©
However, when I copy it using my Mac's Cmd+C and then paste it into another file to be decrypted, I get this error:
bad decrypt
0076160502000000:error:1C80006B:Provider routines:ossl_cipher_generic_block_final:wrong final block length:providers/implementations/ciphers/ciphercommon.c:429:
However, if I do not copy and paste the encrypted data, it can be decrypted properly. I believe this is due to the spacing being changed. Is it that we cannot copy the data to another file to be decrypted and MUST use the exact file that was encrypted?
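The likely culprit is that the ciphertext from openssl enc is raw binary: the clipboard and text editors silently mangle bytes that are not valid text, which is why the pasted copy no longer decrypts. The usual fix is to move ciphertext around in a text-safe encoding; openssl enc has a -base64 (-a) option for exactly this. As a rough sketch of the same idea in C (the usage and the choice of hex encoding are just for illustration):

#include <stdio.h>

/* hex-encode a binary file to stdout so the ciphertext survives
   copy/paste; decode back to binary before decrypting */
int main(int argc, char **argv) {
    if (argc != 2) {
        fprintf(stderr, "usage: %s <cipherfile>\n", argv[0]);
        return 1;
    }
    FILE *in = fopen(argv[1], "rb");
    if (!in) { perror("fopen"); return 1; }
    int c;
    while ((c = fgetc(in)) != EOF)
        printf("%02x", c);   /* two text characters per binary byte */
    putchar('\n');
    fclose(in);
    return 0;
}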

QMake - erratic behaviour when using echo system command

Using QMake, I read some boilerplate code, make modifications, and write the modified code to a file.
However, I get very strange results. I have simplified the problem down to the following:
# Open boiler plate file
interfaceBoilerPlateCode = $$cat($$boilerPlateFile, blob)
# Make sure we read the right content
message("Content read: $$interfaceBoilerPlateCode")
# Write the read text into a file
output = $$system(echo $$interfaceBoilerPlateCode >> $$targetFile) # Doesn't work
output = $$system(echo "Howde" >> $$targetFile) # This works
The file being read is a plain text file containing only the string "Howde".
The contents of the file get read correctly.
However, when I try to write the contents of the file to another target file, I get no output: literally no errors or warnings, but no new file is generated either. However, if I use echo with a string defined in the code itself (as in the last line of the snippet above), a new file does get generated with the string "Howde" inside it.
What is going on? What am I doing wrong, such that the penultimate line does not generate a new file?
Use write_file(). Instead of:
$$system(echo $$content >> $$file_path)
use:
write_file($$file_path, content)
Note that write_file() takes the name of the variable holding the content (without the $$) as its second argument. It also sidesteps the shell quoting and escaping problems that are the likely reason the echo approach fails on multi-line content.

How do I convert my 5GB one-liner file to lines based on a pattern?

I have a 5GB one-line file with JSON data, and each line starts with this pattern: "{"created". I need to be able to use Unix commands on my Mac to convert this monster of a one-liner into as many lines as it deserves. Any commands?
file describes it as: ASCII English text, with very long lines, with no line terminators
If you have enough memory, you can open the file once with the TextWrangler application (the free BBEdit cousin) and use regular search/replace on the whole file. Use \r in the replacement to add a return. It will be very slow at opening the file, and may even hang if you're low on memory, but in the end it will probably work. No scripting, no commands, etc. I did this with big SQL files and sometimes it did the job.
You have to replace your line-start string with the same string with \n, \r, or \r\n in front of it.
It's unclear how it can be a "one liner" file if each line starts with "{"created", but perhaps python -mjson.tool can help you get started:
cat your_source_file.json | python -mjson.tool > nicely_formatted_file.json
Piping raw JSON through python -mjson.tool will cleanly format the JSON to be more human-readable.
OS X ships with both flex and bison; you could use those to write a parser for your data.
You can use PHP as a shell command (if PHP is installed). Just save a text file named "myscript" with the appropriate code (I cannot test the code right now, but the idea is as follows).
UNTESTED CODE
#!/usr/bin/php
<?php
$REPLACE_STRING = '{"created'; // anything you like

// open input file with fopen() in read mode
$inFp = fopen('big_in_file.txt', "r");
// open output file with fopen() in write mode
$outFp = fopen('big_out_file.txt', "w+");

// Problem: a chunk may end in the middle of the pattern. To handle
// that, hold back any tail of the chunk that is a partial pattern
// and prepend it to the next chunk.
$carry = '';
while (!feof($inFp)) {
    // read file chunks with fread(), prepending the held-back tail
    $chunk = $carry . fread($inFp, 8192);
    // add a return before every full occurrence of the pattern
    // (or use "\r\n" for Windows end of lines)
    $chunk = str_replace($REPLACE_STRING, "\r" . $REPLACE_STRING, $chunk);
    // if the chunk ends with a partial pattern, save it for the
    // next iteration instead of writing it out now
    $carry = '';
    if (!feof($inFp)) {
        for ($k = strlen($REPLACE_STRING) - 1; $k > 0; $k--) {
            if (substr($chunk, -$k) === substr($REPLACE_STRING, 0, $k)) {
                $carry = substr($chunk, -$k);
                $chunk = substr($chunk, 0, -$k);
                break;
            }
        }
    }
    // write $chunk to output file
    fwrite($outFp, $chunk);
}
fwrite($outFp, $carry); // flush any leftover tail
fclose($inFp);
fclose($outFp);
?>
After you save it, you must make it executable with chmod a+x ./myscript
and then launch it as ./myscript in the terminal.
After this, the myscript file works like a full Unix command.

Process many EDI files through a single MFX

I've created a mapping in MapForce 2013 and exported the MFX file. Now, I need to be able to run the mapping using MapForce Server. The problem is, I need to specify both the input EDI file and the output file. As far as I can tell, the usage pattern is to run the mapping with MapForce Server using the input/output configuration in the MFX itself, not passed in on the command line.
I suppose I could change the input/output to some standard file name and then just write the input file to that path before performing the mapping, and then grab the output from the standard output file path when the mapping is complete.
But I'd prefer to be able to do something like:
MapForceServer run -in=MyInputFile.txt -out=MyOutputFile.xml MyMapping.mfx > MyLogFile.txt
Is something like this possible? Perhaps using parameters within the mapping?
There are two options that I've come across in dealing with a similar situation.
Option 1 - If you set the input file to *.xml in the component settings, mapforceserver.exe will automatically process every matching file in the directory, assuming your source is XML (this should work for text just the same). Similar to the example below, you can set up a cleanup routine to move the files into another folder after processing.
Note: it looks in the folder where the schema file is located.
Option 2 - Since your output is XML, you can use Altova's RaptorXML (rack up another license charge). You can then generate code in XSLT 2.0 and use a batch file to execute it automatically, something like this:
::#echo off
for %%f IN (*.xml) DO (RaptorXML xslt --xslt-version=2 --input="%%f" --output="out/%%f" %* "mymapping.xslt"
if NOT errorlevel 1 move "%%f" processed
if errorlevel 1 move "%%f" error)
sleep 15
mymapping.bat
I tossed in a sleep command so the batch loops and rechecks every 15 seconds. Unfortunately, this does not work if your output target is a database.

Unix: can I write to the same file in parallel without missing entries?

I wrote a script that executes commands in parallel. I let them all write an entry to the same log file. It does not matter if the order is wrong or entries are interleaved, but I noticed that some entries are missing. I should probably lock the file before writing. However, is it true that if multiple processes try to write to a file simultaneously, it will result in missing entries?
Yes, if different processes independently open and write to the same file, it may result in overlapping writes and missing data. This happens because each process gets its own file pointer, which advances only on that process's own writes.
Instead of locking, a better option might be to open the log file once in an ancestor of all the worker processes, let it be inherited across fork(), and have them all use it for logging. There will then be a single shared file pointer, which advances whenever any of the processes writes a new entry.
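Here is a minimal sketch of that approach, assuming POSIX; the log file name and messages are placeholders. Opening with O_APPEND additionally makes each write() land atomically at the current end of file:

#include <fcntl.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* open the log once in the ancestor; each child inherits the same
       open file description, so all writers share one file offset */
    int logfd = open("run.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (logfd < 0) { perror("open"); return 1; }

    for (int i = 0; i < 4; i++) {
        if (fork() == 0) {                       /* worker process */
            char line[64];
            int n = snprintf(line, sizeof line, "worker %d: done\n", i);
            write(logfd, line, n);   /* one write() per complete entry */
            _exit(0);
        }
    }
    while (wait(NULL) > 0)                       /* reap all workers */
        ;
    close(logfd);
    return 0;
}

Entries from different workers may interleave, but because each entry is a single write() to a shared, append-mode descriptor, none of them can overwrite another.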
In a shell script you should use ">> file" (double greater-than) to append output to that file; the interpreter will open the destination in "append" mode. If your own program also wants to append, follow the directives below:
Open the text file in "append" mode ("a+") and give preference to printing only full lines (don't do multiple 'print' calls followed by a final 'println'; print the entire line with a single 'println').
The fopen documentation states this:
DESCRIPTION
     The fopen() function opens the file whose pathname is the string
     pointed to by filename, and associates a stream with it.

     The argument mode points to a string beginning with one of the
     following sequences:

     r or rb           Open file for reading.
     w or wb           Truncate to zero length or create file for writing.
     a or ab           Append; open or create file for writing at
                       end-of-file.
     r+ or rb+ or r+b  Open file for update (reading and writing).
     w+ or wb+ or w+b  Truncate to zero length or create file for update.
     a+ or ab+ or a+b  Append; open or create file for update, writing at
                       end-of-file.

     The character b has no effect, but is allowed for ISO C standard
     conformance (see standards(5)). Opening a file with read mode (r as
     the first character in the mode argument) fails if the file does not
     exist or cannot be read.

     Opening a file with append mode (a as the first character in the
     mode argument) causes all subsequent writes to the file to be forced
     to the then current end-of-file, regardless of intervening calls to
     fseek(3C). If two separate processes open the same file for append,
     each process may write freely to the file without fear of destroying
     output being written by the other. The output from the two processes
     will be intermixed in the file in the order in which it is written.
It is because of this intermixing that you want to give preference to
using only 'println' (or its equivalent).
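To make that concrete, here is a small C sketch (the log file name is just an example): each entry is built as one complete line and handed to the stream in a single call, so concurrent appenders can interleave whole entries but not corrupt them. With short lines and an immediate fclose(), the buffered entry is flushed to the file as one write:

#include <stdio.h>

/* append one complete log line; "a" mode forces the write to the
   current end-of-file even when other processes append concurrently */
void log_line(const char *msg) {
    FILE *fp = fopen("app.log", "a");
    if (!fp) return;
    fprintf(fp, "%s\n", msg);   /* the whole entry in a single call */
    fclose(fp);                 /* flush: the entry leaves as one write */
}

int main(void) {
    log_line("job started");
    log_line("job finished");
    return 0;
}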
