When configuring your zone, just "add net" for each device as shown below (you could certainly access multiple interfaces this way, but there are still unresolved issues):
zonecfg -z myzone
create
set zonepath=/zfspool/fs/myzone
set autoboot=true
############### see below ###########
add net
set address=129.148.20.2
set physical=ipge1
end
add net
set address=129.148.30.2
set physical=ipge2
end
################ see above ##############
....
verify
commit
Friday, October 27, 2006
Tuesday, October 24, 2006
Sun Fire System Auto Reboot
On OBP, use setenv:
ok> setenv auto-boot? true
On Solaris, use eeprom(1M):
% eeprom auto-boot?=true
Saturday, October 14, 2006
ZFS Ignore fsflush
ZFS ignores fsflush. Here's a snippet of the code in zfs_sync():
/*
* SYNC_ATTR is used by fsflush() to force old filesystems like UFS
* to sync metadata, which they would otherwise cache indefinitely.
* Semantically, the only requirement is that the sync be initiated.
* The DMU syncs out txgs frequently, so there's nothing to do.
*/
if (flag & SYNC_ATTR)
return (0);
However, for a user-initiated sync(1M) or sync(2), ZFS does force
all outstanding data/transactions synchronously to disk.
This goes beyond the requirement of sync(2), which only says I/O is initiated
but not waited on (i.e., asynchronous).
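The contrast between "initiate" and "wait" can be illustrated with a minimal shell sketch (generic, not ZFS-specific; the temporary file is arbitrary):

```shell
# Write a file, then call sync(1) to flush outstanding writes.
# On ZFS, sync does not return until the data is on stable storage,
# which is stronger than the sync(2) requirement to merely initiate I/O.
tmpfile=$(mktemp)
echo "important data" > "$tmpfile"
sync        # on ZFS this blocks until the data has reached disk
cat "$tmpfile"
rm -f "$tmpfile"
```

On a UFS-era system, returning from sync does not guarantee the data is on disk; on ZFS it does.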
Wednesday, October 11, 2006
Mount ISO file on Solaris
- make the ISO image file available as a block device with
lofiadm(1M), e.g.
# lofiadm -a /var/tmp/sol-10-u1-companion-ga.iso
/dev/lofi/1
- mount the block device, e.g.
# mount -r -F hsfs /dev/lofi/1 /mnt
- when you're done, unmount the file system and delete the device with
# umount /mnt
# lofiadm -d /dev/lofi/1
JVM tunables for the x410
Java HotSpot(TM) 32-bit Server VM on Windows, version 1.5.0_06
http://www.spec.org/jbb2005/results/res2006q1/jbb2005-20060117-00061.txt
Java HotSpot(TM) 32-bit Server VM on Solaris, version 1.5.0_08
http://www.spec.org/jbb2005/results/res2006q2/jbb2005-20060512-00112.txt
Monday, October 09, 2006
ZFS on Solaris 11
(1) find the disk and slice
format --> select disk --> partition --> print
# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0t0d0
/pci@0,600000/pci@1/pci@8/pci@0/scsi@1/sd@0,0
1. c0t1d0
/pci@0,600000/pci@1/pci@8/pci@0/scsi@1/sd@1,0
Specify disk (enter its number): 1
selecting c0t1d0
[disk formatted]
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
save - save new disk/partition definitions
inquiry - show vendor, product and revision
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit
format> partition
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 14087 + 2 (reserved cylinders)
Part Tag Flag Cylinders Size Blocks
0 root wm 2 - 1135 5.50GB (1134/0/0) 11539584
1 swap wu 1155 - 2309 5.60GB (1155/0/0) 11753280
2 backup wm 0 - 14086 68.35GB (14087/0/0) 143349312
3 unassigned wm 2310 - 3464 5.60GB (1155/0/0) 11753280
4 unassigned wm 3465 - 4619 5.60GB (1155/0/0) 11753280
5 unassigned wm 4620 - 5774 5.60GB (1155/0/0) 11753280
6 unassigned wm 5775 - 12931 34.73GB (7157/0/0) 72829632
7 home wm 12932 - 14086 5.60GB (1155/0/0) 11753280
(2) use c0t1d0s6 for ZFS
(3) create a pool on that slice
# zpool create ktspool c0t1d0s6
(4) list the pool
# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
ktspool 34,5G 33,5K 34,5G 0% ONLINE -
(5) check pool status
# zpool status
pool: ktspool
state: ONLINE
scrub: none requested
(6) a ktspool file system was created automatically; verify it
# df -kh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9,8G 3,6G 6,2G 37% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1,1M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 14G 8K 14G 1% /tmp
swap 14G 48K 14G 1% /var/run
/dev/dsk/c0t0d0s7 50G 56M 49G 1% /export/home
ktspool 34G 9K 34G 1% /ktspool
(7) create a new file system, ktspool/kts
# zfs create ktspool/kts
(8) verify the file system creation
# df -kh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9,8G 3,6G 6,2G 37% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1,1M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 14G 8K 14G 1% /tmp
swap 14G 48K 14G 1% /var/run
/dev/dsk/c0t0d0s7 50G 56M 49G 1% /export/home
ktspool 34G 9K 34G 1% /ktspool
ktspool/kts 34G 9K 34G 1% /ktspool/kts
(9) change the mount point of the ZFS file system to /kabirazfs
# zfs set mountpoint=/kabirazfs ktspool/kts
(10) verify the new mount point
# df -kh
Filesystem size used avail capacity Mounted on
/dev/dsk/c0t0d0s0 9,8G 3,6G 6,2G 37% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 14G 1,1M 14G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
fd 0K 0K 0K 0% /dev/fd
swap 14G 8K 14G 1% /tmp
swap 14G 48K 14G 1% /var/run
/dev/dsk/c0t0d0s7 50G 56M 49G 1% /export/home
ktspool 34G 9K 34G 1% /ktspool
ktspool/kts 34G 9K 34G 1% /kabirazfs
You can also see that /kabirazfs was created under "/".
(11) zpool iostat -x 5
(12) check the ZFS property settings; note, for example, that compression is disabled
# zfs get all ktspool/kts
NAME PROPERTY VALUE SOURCE
ktspool/kts type filesystem -
ktspool/kts creation Mon Oct 9 19:08 2006 -
ktspool/kts used 9,50K -
ktspool/kts available 34,2G -
ktspool/kts referenced 9,50K -
ktspool/kts compressratio 1.00x -
ktspool/kts mounted yes -
ktspool/kts quota none default
ktspool/kts reservation none default
ktspool/kts recordsize 128K default
ktspool/kts mountpoint /kabirazfs local
ktspool/kts sharenfs off default
ktspool/kts checksum on default
ktspool/kts compression off default
ktspool/kts atime on default
ktspool/kts devices on default
ktspool/kts exec on default
ktspool/kts setuid on default
ktspool/kts readonly off default
ktspool/kts zoned off default
ktspool/kts snapdir hidden default
ktspool/kts aclmode groupmask default
ktspool/kts aclinherit secure default
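Since compression is off by default, it can be enabled per file system; a sketch using the same dataset as above:

```shell
# zfs set compression=on ktspool/kts
# zfs get compression ktspool/kts
```

Only data written after the property is set gets compressed; existing blocks are left as-is.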
Friday, October 06, 2006
Classic Relational Algebra Algorithm
The algebra on sets of tuples (relations) can be used to express typical queries about those relations: (1) union (2) set difference (3) Cartesian product (4) selection (5) projection (6) aggregation (7) renaming.
The full set of operations: set operations (union, intersection, difference), selection, projection, Cartesian product, natural join, theta-join, renaming, duplicate elimination, aggregation, grouping, sorting, extended projection, outer join (natural, left, right).
Intersection, theta-join, and natural join are dependent operations: they can be expressed in terms of the others.
Union, difference, product, selection, projection, and renaming are independent operations.
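The dependence can be made concrete: each dependent operator rewrites using only independent ones. For a shared attribute A and a result attribute list L (duplicates removed):

```latex
R \cap S = R - (R - S)
R \bowtie_{\theta} S = \sigma_{\theta}(R \times S)
R \bowtie S = \pi_{L}\big(\sigma_{R.A = S.A}(R \times S)\big)
```

The first identity shows intersection via two differences; the joins reduce to a Cartesian product followed by selection (and, for natural join, a projection that merges the shared attribute).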
Tuesday, October 03, 2006
NFS Service in NGZ
You cannot run an NFS server in a non-global zone (NGZ), but there is a way to get NFS-like service in an NGZ.
You can make the global zone (GZ) an NFS server and just use a loopback mount from the GZ into the NGZ instead; this simulates NFS. Since it is a loopback, it should be faster and more efficient than NFS. Just something to ponder. If your requirement is to run the NFS server itself inside an NGZ, you cannot do that today; maybe it will be addressed in an update, depending on demand.
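The loopback approach can be sketched with zonecfg(1M); the zone name and paths here are hypothetical:

```shell
zonecfg -z myzone
add fs
set dir=/export/share
set special=/export/share
set type=lofs
end
commit
```

Here dir is the mount point inside the zone and special is the directory in the global zone; after the next zone boot, the GZ directory is visible in the NGZ without involving NFS at all.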
Monday, October 02, 2006
Denial of Service on X.509
(1)
Vulnerability Note VU#423396
X.509 certificate verification may be vulnerable to resource exhaustion:
http://www.kb.cert.org/vuls/id/423396
(2)
NISCC Vulnerability Advisory
729618/NISCC/PARASITIC-KEYS
Denial-of-Service Condition Affecting X.509 Certificates Verification:
http://www.niscc.gov.uk/niscc/docs/re-20060928-00661.pdf?lang=en
(3)
After x unsuccessful logins, it is possible to deactivate the account.
But is it also possible to send an email to some administrator that the account was deactivated?
(4)
DS uses the NSS library (Mozilla), which is listed as
not vulnerable in the 729618/NISCC/PARASITIC-KEYS document.