Determine which hard drive the database files and transaction files are stored on - unix

I have an old Sybase server whose database is acting up. I have tried rebuilding the file system and the database file, but the problem keeps coming back. I want to replace the hard drive that the database files and transaction files are stored on, and because I am not familiar with Unix I need help determining exactly which drive that is. I also want to know whether those files are on the same hard drive as the operating system; if they are, I will need to re-install the operating system as well as restore the database to the new drive. Obviously it will be better if the database files and transaction files are not on the same drive as the operating system. Please help me determine these two things.
So far, I have found the following:
(1) I use the sp_helpdb command and find that the database files and transaction files are stored on these logical devices:
sybdbs
syblogs
master
sybdbs2
(2) I use the sp_helpdevice command to look into the four logical devices shown above, and find that they map to these physical devices:
/dev/rdsk/c0t0d0s1
/dev/rdsk/c0t3d0s4
d_master
/dev/rdsk/sybdbs2
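Two of these names puzzle me: d_master is not an absolute path, and /dev/rdsk/sybdbs2 does not follow the usual cXtYdZsN pattern, so I suspect both are files or symbolic links that someone created. My plan is to resolve them with ls (the /sybase location is only a guess, based on my mount output in step (5)):
ls -l /sybase/d_master     # assumed location; a symlink listing would show its target
ls -l /dev/rdsk/sybdbs2    # may be a symlink an admin added under /dev/rdsk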
(3) When I use sp_helpdevice to list all the devices, I see this:
device_name        physical_name                               description                                      status cntrltype device_number low      high
------------------ ------------------------------------------- ------------------------------------------------ ------ --------- ------------- -------- --------
historydump        /export/home/syb11.dump/history.dump        disk, dump device                                16     2         0             0        0
isproddump         /export/home/syb11.dump/isprod.dump         disk, dump device                                16     2         0             0        0
istestdump         /export/home/syb11.dump/istest.dump         disk, dump device                                16     2         0             0        0
master             d_master                                    special, physical disk, 100.00 MB                2      0         0             0        51199
masterdump         /export/home/syb11.dump/master.dump         disk, dump device                                16     2         0             0        0
modeldump          /export/home/syb11.dump/model.dump          disk, dump device                                16     2         0             0        0
prodtestdump       /export/home/syb11.dump/prodtest.dump       disk, dump device                                16     2         0             0        0
sybdbs             /dev/rdsk/c0t0d0s1                          special, default disk, physical disk, 2000.00 MB 3      0         3             50331648 51355647
sybdbs2            /dev/rdsk/sybdbs2                           special, physical disk, 1.00 MB                  2      0         5             83886080 83886591
syblogs            /dev/rdsk/c0t3d0s4                          special, physical disk, 850.00 MB                2      0         4             67108864 67544063
sybscurty          /dev/rdsk/c0t3d0s5                          special, physical disk, 100.00 MB                2      0         2             33554432 33605631
sybsecuritydump    /export/home/syb11.dump/sybsecurity.dump    disk, dump device                                16     2         0             0        0
sybsystemprocsdump /export/home/syb11.dump/sybsystemprocs.dump disk, dump device                                16     2         0             0        0
sysprocsdev        /dev/rdsk/c0t0d0s4                          special, physical disk, 100.00 MB                2      0         1             16777216 16828415
tapedump1          /dev/rmt4                                   tape, 625 MB, dump device                        16     3         0             0        20000
tapedump2          /dev/rst0                                   disk, dump device                                16     2         0             0        20000
uniface724dump     /export/home/syb11.dump/uniface724.dump     disk, dump device                                16     2         0             0        0
uniface7dump       /export/home/syb11.dump/uniface7.dump       disk, dump device                                16     2         0             0        0
(4) I want to know more about those physical devices. I use the df command to examine them:
df -k /dev/rdsk/c0t0d0s1
df -k /dev/rdsk/c0t3d0s4
df -k d_master
df -k /dev/rdsk/sybdbs2
The df command complains that the first three devices are “not a block device, directory or mounted resource”.
On the other hand, the df command shows the following info for the last device:
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t3d0s0 576558 371019 147889 71% /
In any case, this doesn’t tell me which drive(s) those devices are on.
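From what I can tell, df only reports mounted filesystems, so raw partitions that Sybase opens directly will never show up in it; for the last device it apparently just reports the filesystem holding that path. If I read the man page correctly, prtvtoc prints the partition table of the disk that a slice belongs to (slice 2 conventionally covers the whole disk), so something like this might show how each disk is carved up:
prtvtoc /dev/rdsk/c0t0d0s2    # partition map of one disk
prtvtoc /dev/rdsk/c0t3d0s2    # partition map of the other disk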
(5) When I use the mount command, I see this:
/ on /dev/dsk/c0t3d0s0 read/write/setuid on Mon Jul 6 11:10:46 2015
/usr on /dev/dsk/c0t3d0s6 read/write/setuid on Mon Jul 6 11:10:46 2015
/proc on /proc read/write/setuid on Mon Jul 6 11:10:46 2015
/dev/fd on fd read/write/setuid on Mon Jul 6 11:10:46 2015
/tmp on swap read/write on Mon Jul 6 11:10:49 2015
/export on /dev/dsk/c0t3d0s7 setuid/read/write on Mon Jul 6 11:10:49 2015
/freespace on /dev/dsk/c0t0d0s5 setuid/read/write on Mon Jul 6 11:10:49 2015
/sybase on /dev/dsk/c0t0d0s0 setuid/read/write on Mon Jul 6 11:10:49 2015
/usr/openwin on /dev/dsk/c0t3d0s3 setuid/read/write on Mon Jul 6 11:10:49 2015
I cannot figure out the connection between the mounted devices above and the physical devices that hold the database files and transaction files. I also cannot link the mounted devices above to the hard drives shown in step (7) below.
(6) When I use the cat /etc/vfstab command, I see this:
#device              device               mount          FS     fsck  mount    mount
#to mount            to fsck              point          type   pass  at boot  options
#
#/dev/dsk/c1d0s2     /dev/rdsk/c1d0s2     /usr           ufs    1     yes      -
/proc                -                    /proc          proc   -     no       -
fd                   -                    /dev/fd        fd     -     no       -
swap                 -                    /tmp           tmpfs  -     yes      -
/dev/dsk/c0t3d0s0    /dev/rdsk/c0t3d0s0   /              ufs    1     no       -
/dev/dsk/c0t3d0s6    /dev/rdsk/c0t3d0s6   /usr           ufs    1     no       -
/dev/dsk/c0t3d0s7    /dev/rdsk/c0t3d0s7   /export        ufs    2     yes      -
/dev/dsk/c0t0d0s5    /dev/rdsk/c0t0d0s5   /freespace     ufs    2     yes      -
/dev/dsk/c0t0d0s0    /dev/rdsk/c0t0d0s0   /sybase        ufs    2     yes      -
/dev/dsk/c0t3d0s3    /dev/rdsk/c0t3d0s3   /usr/openwin   ufs    2     yes      -
/dev/dsk/c0t3d0s1    -                    -              swap   -     no       -
# The following lines have been commented-out to allow Sybase to access these
# partitions and Raw Partitions. Nov-24-1999
# /dev/dsk/c0t0d0s3  /dev/rdsk/c0t0d0s3   /master        ufs    2     yes      -
# /dev/dsk/c0t0d0s1  /dev/rdsk/c0t0d0s1   /sybdbs        ufs    2     yes      -
# /dev/dsk/c0t3d0s4  /dev/rdsk/c0t3d0s4   /syblogs       ufs    2     yes      -
# /dev/dsk/c0t3d0s5  /dev/rdsk/c0t3d0s5   /sybscurty     ufs    2     yes      -
# /dev/dsk/c0t0d0s4  /dev/rdsk/c0t0d0s4   /sybtemproc    ufs    2     yes      -
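The commented-out block at the bottom looks like it could be the missing link: it names the same raw slices that sp_helpdevice reported in step (2). A trivial cross-check I plan to run (plain grep, nothing Solaris-specific), to pull every vfstab line that mentions either disk:
grep 'c0t0d0' /etc/vfstab    # every line mentioning disk c0t0d0
grep 'c0t3d0' /etc/vfstab    # every line mentioning disk c0t3d0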
(7) When I use the format command, I see these two hard drives:
AVAILABLE DISK SELECTIONS:
0. c0t0d0 <IBM-DNES-309170-SA30 cyl 11195 alt 2 hd 5 sec 320>
/iommu#f,e0000000/sbus#f,e0001000/espdma#f,400000/esp#f,800000/sd#0,0
1. c0t3d0 <SEAGATE-ST34520N-1206 cyl 9004 alt 2 hd 4 sec 246>
/iommu#f,e0000000/sbus#f,e0001000/espdma#f,400000/esp#f,800000/sd#3,0
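If I understand the Solaris device-naming convention correctly, a name of the form cXtYdZsN means controller X, SCSI target Y, LUN Z, slice (partition) N, so stripping the trailing sN should group each slice under its physical drive. A small sh sketch over the raw devices from step (2) (this is my reading of the convention, not something I have verified):
for dev in /dev/rdsk/c0t0d0s1 /dev/rdsk/c0t3d0s4 /dev/rdsk/c0t0d0s4 /dev/rdsk/c0t3d0s5; do
    # e.g. c0t0d0s1 -> drive c0t0d0
    echo "$dev -> drive `basename $dev | sed 's/s[0-9]*$//'`"
done
If that reading is right, the drive names produced here should match the c0t0d0 and c0t3d0 entries in the format output above.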
(8) I don't see any external storage attached to the Sybase server. Having said this, there is a backup Sybase server, and that backup server has an external device attached to it through a SCSI cable. At this point, I assume the database files and transaction files are all stored inside the Sybase server itself.
By the way, the Sybase server uses this Unix operating system:
SunOS <my-server-name> 5.4 Generic_101945-62 sun4m sparc
And the Sybase version is:
SQL Server/11.0.3.2/P/Sun_svr4/OS 5.4/SWR 7578 Rollup/OPT/Mon Nov 3 22:19:21 PST 1997
By the way, here is what I have tried so far to repair the database:
• Ran dbcc checkalloc(, fix). Unfortunately, this command could not fix the problems and could not complete.
• Dropped the database, added a new one, and restored it from backup. Unfortunately, the restore failed to complete.
• Ran fsck to fix the devices. It could not complete and complained about "MAGIC NUMBER WRONG".
• Used the Analyze option of the format command to repair Disk 0, then added a new database and restored it from backup. This method seemed to work, but after a week or so I found a table with an I/O error. Honestly, I don't even know whether the databases are really on Disk 0 or not.
Please help me determine which hard drive those database files and transaction files are stored on, and whether they are on the same hard drive as the Unix operating system.
Thanks in advance.
Jay Chan

Related

How do I synchronize my gluster replicated volumes?

I want to use a Gluster replicated volume for SQLite db storage.
However, when the '.db' file is updated, Linux does not detect the change, so synchronization between the bricks does not happen.
Is there a way to force a sync?
It does not synchronize even when I use the gluster volume heal command.
< My Gluster volume status >
[root@be-k8s-worker-1 common]# gluster volume create sync_test replica 2 transport tcp 10.XX.XX.X1:/home/common/sync_test 10.XX.XX.X2:/home/common/sync_test
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
(y/n) y
volume create: sync_test: success: please start the volume to access data
[root@be-k8s-worker-1 common]# gluster volume start sync_test
volume start: sync_test: success
[root@be-k8s-worker-1 sync_test]# gluster volume status sync_test
Status of volume: sync_test
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.XX.XX.X1:/home/common/sync_test    49155     0          Y       1142
Brick 10.XX.XX.X2:/home/common/sync_test    49155     0          Y       2134
Self-heal Daemon on localhost               N/A       N/A        Y       2612
Self-heal Daemon on 10.XX.XX.X1             N/A       N/A        Y       4257
Task Status of Volume sync_test
------------------------------------------------------------------------------
There are no active volume tasks
< Problem Case >
[root@be-k8s-worker-1 sync_test]# ls -al ## client 1
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.db
[root@be-k8s-worker-1 sync_test2]# ls -al ## client 2
total 20
drwxrwxrwx. 4 root root 122 Oct 17 10:51 .
drwx------. 8 sbyun domain users 4096 Oct 17 10:50 ..
-rw-r--r--. 1 root root 0 Oct 17 10:35 test
-rwxr--r--. 1 sbyun domain users 16384 Oct 17 10:52 test.db
## diff -> No result
[root@be-k8s-worker-1 user]# diff sync_test/test.db sync_test2/test.db
But if I compare the same file on Windows, the two copies differ (Windows comparison screenshot not reproduced here).
My SQLite database was set to WAL mode. So the wal file was being updated and the .db file was not immediately synced.
I turned off WAL mode with this command:
PRAGMA journal_mode=DELETE;
I confirmed that it was synced immediately.
According to the SQLite documentation, this is expected, because WAL does not work over a network filesystem:
All processes using a database must be on the same host computer; WAL does not work over a network filesystem.
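For reference, the journal mode can be inspected and switched with the sqlite3 command-line tool as well (the paths here are just my test files):
sqlite3 sync_test/test.db "PRAGMA journal_mode;"          # prints wal before the change
sqlite3 sync_test/test.db "PRAGMA journal_mode=DELETE;"   # prints delete once WAL is off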

Strange "attempt to access beyond end of device", where to look?

I am running a few Xen servers on my company network. Recently, on one of them, I have been trying to rsync (on the Dom0 console) a big server image from another machine, but every time I run into a system crash after somewhere between 30 and 100 GB. The syslog and kernel log show me something like this:
Sep 12 16:41:19 ampxen1 kernel: [ 1730.917516] attempt to access beyond end of device
Sep 12 16:41:19 ampxen1 kernel: [ 1730.917518] dm-1: rw=1, want=8878402463988083936, limit=3759505408
Sep 12 16:41:19 ampxen1 kernel: [ 1730.917520] EXT4-fs warning (device dm-1): ext4_end_bio:323: I/O error 10 writing to inode 33030164 (offset 47354740736 size 5881856 starting block 1109800307998510491)
...continuing with several hundred thousand similar lines per second, eventually making the machine unreachable. The absurdly high starting block of the EXT4 write operation (around 10^18, i.e. in the exabyte range) is clearly the thing to look at, but I have been unable to find any mention of what could cause it.
The server is based on Ubuntu 18.04.3 with a standard Xen install from the repositories. Storage is two 2 TB disks in RAID1, configured as shown below, with an EXT4 filesystem on the large partition used for our server images. I have checked the disks with smartctl and the filesystem(s) with e2fsck, for what it's worth. It seems to be a filesystem issue, but I am wondering whether the Xen kernel could be involved. Any ideas of what to look for would be appreciated!
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 500G 0 loop
sda 8:0 0 1,8T 0 disk
├─sda1 8:1 0 476M 0 part /boot/efi
├─sda2 8:2 0 1,8T 0 part
│ └─md0 9:0 0 1,8T 0 raid1
│ ├─ampxen1.0-ampxen1.dom0 253:0 0 23,3G 0 lvm /
│ └─ampxen1.0-ampxen1.vms0 253:1 0 1,8T 0 lvm /srv/vms0
└─sda3 8:3 0 46,5G 0 part [SWAP]
sdb 8:16 0 1,8T 0 disk
├─sdb1 8:17 0 476M 0 part
├─sdb2 8:18 0 1,8T 0 part
│ └─md0 9:0 0 1,8T 0 raid1
│ ├─ampxen1.0-ampxen1.dom0 253:0 0 23,3G 0 lvm /
│ └─ampxen1.0-ampxen1.vms0 253:1 0 1,8T 0 lvm /srv/vms0
└─sdb3 8:19 0 46,5G 0 part [SWAP]
I finally figured out that the problem was something as trivial as a faulty RAM module: running a memtest showed lots of errors on one of the four 16 GB modules. It seems that memory was only maxed out exactly when copying large files, while my existing virtual servers on the machine were running just fine at all other times.
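In case it helps anyone else: besides a full offline memtest, machines with ECC memory can sometimes expose error counters at runtime through the EDAC driver (a sketch; assumes the EDAC modules are loaded on your kernel):
# Corrected (ce) and uncorrected (ue) memory error counts per memory controller:
grep -H . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count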

ipvsadm -L -n suddenly showing no active connections

I have a very odd problem in a proxy cluster of four Squid proxies:
One of the machines is the master. The master is running ldirectord, which checks the availability of all four machines and distributes new client connections.
All of a sudden, after years of operation, I'm encountering this problem:
1) The machine serving the master role is not being assigned new connections; old connections are served until a new proxy is assigned to the clients.
2) The other machines are still processing requests, taking over the clients from the master (so far, so good)
3) "ipvsadm -L -n" shows ever-decreasing ActiveConn and InActConn values.
Once I migrate the master role to another machine, "ipvsadm -L -n" is showing lots of active and inactive connections, until after about an hour the same thing happens on the new master.
Datapoint: This happened again this afternoon, and now "ipvsadm -L -n" shows:
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port          Forward Weight ActiveConn InActConn
TCP  141.42.1.215:8080 wlc persistent 1800
  -> 141.42.1.216:8080           Route   1      98         0
  -> 141.42.1.217:8080           Route   1      135        0
  -> 141.42.1.218:8080           Route   1      1          0
  -> 141.42.1.219:8080           Route   1      2          0
There has been no change in the numbers for quite some time now.
Some more stats (ipvsadm -L --stats -n):
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port     Conns   InPkts  OutPkts  InBytes OutBytes
  -> RemoteAddress:Port
TCP  141.42.1.215:8080   1990351 87945600        0   13781M        0
  -> 141.42.1.216:8080    561980 21850870        0    2828M        0
  -> 141.42.1.217:8080    467499 23407969        0    3960M        0
  -> 141.42.1.218:8080    439794 19364749        0    2659M        0
  -> 141.42.1.219:8080    521378 23340673        0    4335M        0
Value for "Conns" is constant now for all realservers and the virtual server now. Traffic is still flowing (InPkts increasing).
I examined the output of "ipvsadm -L -n -c" and found:
25 FIN_WAIT
534 NONE
977 ESTABLISHED
Then I waited a minute and got:
21 FIN_WAIT
515 NONE
939 ESTABLISHED
It turned out that a local bird installation was injecting a route for the IP of the virtual server, which took precedence over ARP.
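A quick way to confirm that a routing daemon is shadowing the virtual IP (a sketch; assumes iproute2 and the BIRD client are installed, VIP taken from the output above):
ip route get 141.42.1.215            # the route the kernel actually picks for the VIP
birdc show route for 141.42.1.215    # whether bird itself carries a route for it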

I want all the records till the end of the file which are greater than a given date in unix

I want all the records from the log file, starting at the first date greater than a given date and continuing to the end of the file.
Suppose the given date is Mon Dec 14 22:00:03 2015; then I want every line from the first occurrence of a date greater than this one until the end of the file.
ex: awk ' { if ( $0 > "Tue Dec 15 08:00:00 2015") print } ' file.out
only prints the lines that compare greater than the date string, but I want all the lines after the first match, through the end of the file.
Please note that:
1. I cannot use a regex, since I don't know whether an entry for that exact date and time (H:M:S) is present in the log file, so I have to use a greater-than comparison on dates.
2. The date is not present on every line of the log file; it appears in between, on its own line.
Please help.
Sample log file:
Mon Dec 14 02:00:00 2015
Clearing Resource Manager plan via parameter
Mon Dec 14 07:02:57 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 07:02:57
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:01:37 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 08:01:37
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:54:33 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 08:54:33
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 08:57:18 2015
Thread 1 advanced to log sequence 232
Current log# 2 seq# 232 mem# 0: /u04/app/oracle/oradata/kcom/redo02.log
Mon Dec 14 08:57:19 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc3_3953.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 09:17:45 2015
***********************************************************************
Fatal NI connect error 12504, connecting to:
(DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=)(CID=(PROGRAM=oracle)(HOST=ltest8)(USER=oracle)))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.115)(PORT=1521)))
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 14-DEC-2015 09:17:45
Tracing not turned on.
Tns error struct:
ns main err code: 12564
TNS-12564: TNS:connection refused
ns secondary err code: 0
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
Mon Dec 14 10:25:24 2015
QKSRC: ViewText[ecode=942] = SELECT /*+ result_cache */ ID, 'PLUGIN_'||NAME AS NAME, STANDARD_ATTRIBUTES, SQL_MIN_COLUMN_COUNT, NVL(SQL_MAX_COLUMN_COUNT, 999) AS SQL_MAX_COLUMN_COUNT, SQL_EXAMPLES FROM WWV_FLOW_PLUGINS WHERE FLOW_ID = :B2 AND PLUGIN_TYPE = :B1
Mon Dec 14 14:31:14 2015
Thread 1 advanced to log sequence 233
Current log# 3 seq# 233 mem# 0: /u04/app/oracle/oradata/kcom/redo03.log
Mon Dec 14 14:31:15 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc0_3947.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 20:28:23 2015
Thread 1 advanced to log sequence 234
Current log# 4 seq# 234 mem# 0: /u04/app/oracle/oradata/kcom/redo04.log
Mon Dec 14 20:28:24 2015
Errors in file /u01/app/oracle/diag/rdbms/kcom/Rialto/trace/Rialto_arc1_3949.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 268435456 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from flash recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.
************************************************************************
Mon Dec 14 22:00:00 2015
Setting Resource Manager plan SCHEDULER[0x2C09]:DEFAULT_MAINTENANCE_PLAN via scheduler window
Setting Resource Manager plan DEFAULT_MAINTENANCE_PLAN via parameter
Mon Dec 14 22:00:03 2015
Mon Dec 14 22:00:03 2015
Logminer Bld: Lockdown Complete. DB_TXN_SCN is UnwindToSCN (LockdownSCN) is 18957974
Tue Dec 15 02:00:00 2015
Clearing Resource Manager plan via parameter
Tue Dec 15 02:00:02 2015
A simple Python 2 parser with a hardcoded datetime and input file, tested with your given log. It has plenty of room for optimization, but it works as a start, I guess.
#!/usr/bin/env python
import re
from datetime import datetime

# year, month, day, hour, minute
filter_from = datetime(2015, 12, 13, 23, 40)

with open("tmp.log") as log:
    match = False
    for line in log:
        if match:
            # Once the first later timestamp has been seen, echo everything.
            # Trailing comma: line already ends in a newline.
            print line,
        else:
            # Timestamp lines look like "Mon Dec 14 22:00:03 2015"
            # (a single-digit day would need a tweaked pattern).
            candidate = re.match(r'[A-Z][a-z][a-z] [A-Z][a-z][a-z] \d\d \d\d:\d\d:\d\d \d\d\d\d', line)
            if candidate:
                parsed_date = datetime.strptime(candidate.group(0), "%a %b %d %X %Y")
                if parsed_date > filter_from:
                    match = True
                    print line,
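Since the question asks for awk, here is a rough GNU awk equivalent (a sketch that assumes gawk for mktime, with the cutoff hardcoded like in the Python version above):
gawk '
BEGIN {
    split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
    for (i = 1; i <= 12; i++) mon[m[i]] = i
    cutoff = mktime("2015 12 14 22 00 03")    # the given date
}
matched { print; next }                       # after the first hit, print everything
/^[A-Z][a-z][a-z] [A-Z][a-z][a-z] [ 0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9] [0-9][0-9][0-9][0-9]$/ {
    split($4, t, ":")
    if (mktime($5 " " mon[$2] " " $3 " " t[1] " " t[2] " " t[3]) > cutoff) {
        matched = 1
        print
    }
}' file.out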

Can I decide how much memory to allocate in an LSF queue

Is there any option to decide how much memory I can allocate in LSF?
I tried
bsub -R "rusage[mem=10000]" sleep 1000s
But when I checked resource usage using "bjobs -l", I get this:
Job <203180>, User <xxxxx>, Project <default>, Status <RUN>, Queue <medium>,
Job Priority <50>, Command <sleep 1000s>
Thu Apr 12 09:49:56: Submitted from host <xxxx>, CWD <xx>, Requested Resources <rusa
ge[mem=10000]>;
Thu Apr 12 09:49:58: Started on <xxxx>, Execution Home <xxxx>, E
xecution CWD <xxxxx>;
Thu Apr 12 09:49:58: Resource usage collected.
MEM: 3 Mbytes; SWAP: 16 Mbytes; NTHREAD: 1
PGID: 28231; PIDs: 28231
Where am I wrong?
bsub -R "rusage[mem=10000]": will initially reserve 10000 MBytes of memory.
Whereas:
"MEM: 3 Mbytes" is the total resident memory usage of all currently running processes in your job.
"SWAP: 16 Mbytes" is the total virtual memory usage of all currently running processes in your job.
The values "3 Mbytes" and "16 Mbytes" may change during the runtime.
On my system we use -M, e.g. bsub -M 1 to request a 1 GB memory limit; the job is killed if it goes above that limit.
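If it helps, reservation and limit can be combined (a sketch; the unit of -M depends on LSF_UNIT_FOR_LIMITS in lsf.conf, so it may be KB, MB or GB on your cluster):
# Reserve 10000 MB for scheduling and also enforce a hard memory limit:
bsub -R "rusage[mem=10000]" -M 10000 sleep 1000s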
