A load balancer will use tcp_connect to ping the downstream systems it is protecting, and since this is not a protocol the application understands, you get that error. It also ends up leaking an orphaned FD *each* time.
If you do find that an LB is the source, the solution is to reconfigure the LB to use an acceptable protocol (HTTP for web, LDAP for DS, etc.). Almost all LBs have ready-to-use scripted options among their config options.
Although it is not a true fix, another option is to set the idle timeout on the DS to something like 900/1800 s so it can reclaim the wasted FDs periodically. This is fine as a temporary measure, though customers sometimes leave things in that state for years. The true fix is to reconfigure the LB to use the proper protocol.
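As a stopgap, the idle timeout can be set over LDAP against the server's configuration entry; a sketch assuming a Sun DS-style cn=config with the nsslapd-idletimeout attribute (value in seconds):

```ldif
dn: cn=config
changetype: modify
replace: nsslapd-idletimeout
nsslapd-idletimeout: 900
```

Apply it with ldapmodify as a directory administrator; connections idle longer than this are closed and their FDs recovered.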
Wednesday, October 24, 2007
cron script from open solaris
For every command in the script?
http://cvs.opensolaris.org/source/xref/jds/spec-files/trunk/cron-script.sh
See if a single-command experiment works first.
> Where does the redirection go? Inside the quoted command, or outside it, applied to zlogin?
> zlogin -l gbuild big-zone "the_command > file"  vs.  zlogin -l gbuild big-zone "the_command" > file
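The difference can be demonstrated locally with sh -c standing in for zlogin (the /tmp paths are just for illustration): redirection inside the quotes is performed by the inner shell (for zlogin, inside the zone), while redirection outside the quotes is performed by the calling shell (the global zone).

```shell
# Inner redirection: the file is opened by the inner shell
# (with zlogin, that would be inside the zone's filesystem).
sh -c 'echo inner > /tmp/redir_inner.txt'

# Outer redirection: the file is opened by the calling shell
# (with zlogin, on the global zone).
sh -c 'echo outer' > /tmp/redir_outer.txt
```

So for the cron script, put the redirection inside the quotes if the output file should live in the zone, outside if it should live on the global zone.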
DS DB index out of sync: poor performance
DS shows slower search response times on Solaris 10U3 than on Windows. The cache size is the same on the two boxes, even though it should be bigger on Solaris (Windows is 32-bit, Solaris is 64-bit).
This search, for example, consistently shows different etimes.
On Windows the search took 0.079000 seconds:
[19/Oct/2007:15:33:41 +0200] conn=2062529 op=1 msgId=2 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:15:33:42 +0200] conn=2062529 op=1 msgId=2 - RESULT err=0 tag=101 nentries=114 etime=0.079000
However, the same search took 0.320350 seconds on Solaris:
[19/Oct/2007:17:02:30 +0200] conn=460 op=1 msgId=2 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:17:02:30 +0200] conn=460 op=1 msgId=2 - RESULT err=0 tag=101 nentries=114 etime=0.320350
In the Solaris error logs, I see "candidate not found" lines like these:
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 deref=0 sizelimit=0 timelimit=0 attrsonly=0 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree selected backend : unix
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree release backend : unix
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree selected backend : userRoot
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 306256 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 307389 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309528 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309532 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309538 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309542 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309726 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309866 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309877 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309878 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309881 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309898 not found
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree release backend : userRoot
It looks like the index is out of sync: the uniquemember index is returning candidate IDs for entries that no longer exist in the database, so each one costs a wasted lookup.
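If the uniquemember index really is stale, rebuilding it should bring the candidate list back in sync with the entry database. A hedged sketch, assuming a Sun DS 5.x-style instance with the bundled db2index.pl script (the instance path, backend name, and bind credentials below are placeholders, not taken from this post):

```shell
# Rebuild the uniquemember index for the userRoot backend.
# Paths and credentials are illustrative; adjust to your deployment.
cd /var/opt/mps/serverroot/slapd-myinstance
./db2index.pl -D "cn=Directory Manager" -w secret -n userRoot -t uniquemember
```

After the reindex task completes, re-run the search and compare etimes; the "candidate not found" debug lines should disappear.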
Zone attaching to shared storage
Using "attach -F" (i.e., capital F) you can attach shared storage
to a different zone even if it has not been detached from its original
zone, provided you previously created the zone configuration via "zonecfg -z
..." just like on the original host.
Of course, you must make sure that only one node actually boots the
zone at a time.
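A sketch of the force-attach sequence on the second host, assuming the zone is named big-zone and its zonepath lives on the shared storage (the names are illustrative):

```shell
# On the new host: recreate the configuration from the zonepath on the
# shared storage, then force-attach without a prior detach elsewhere.
zonecfg -z big-zone "create -a /shared/zones/big-zone"
zoneadm -z big-zone attach -F    # -F forces attach, skipping detach-time validation
zoneadm -z big-zone boot         # make sure no other node has it booted
```

Because -F skips the usual validation, booting the same zone from two nodes at once will corrupt the shared storage; fence it at the cluster level.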
Thursday, October 11, 2007
Scheme Setup
(1) Install Petite Chez Scheme.
(2) Install the EOPL source code.
(3) Change directory:
(cd "/home/test/eopl/intercept")
(4) Load the init code:
(load "chez-init.scm")
Monday, October 08, 2007
DB cache size for large DB on DS
The best results come with a maximum of about 50 GB of database cache, because of checkpointing, and that is the concern here: the response times of over 1 second are caused by checkpointing. With a very large DB and not a lot of memory, reducing the db cache size may help.
(1) Setting both the entry cache and db cache to 100 MB each offered the best performance, allowing the fs cache to do everything. This was on Solaris 10 with the latest updates for ZFS.
(2) Modify the file system partitions to force direct I/O on the ones we don't want being cached; this way we reserve more of the fs cache for LDAP.
(3) For example, if your logs are on a separate FS, you can exclude that mount, etc. For the partition/disk that holds the database, set "noatime" and "logging"; and if "/data" is the disk that holds the database, run: tunefs -e 2097152 /data
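Points (2) and (3) can be sketched as vfstab entries plus the tunefs call, assuming UFS with the database on /data and logs on a separate /logs slice (the device names are illustrative):

```shell
# /etc/vfstab (illustrative devices): the database slice gets noatime+logging,
# slices we don't want competing for fs cache get forcedirectio.
#   /dev/dsk/c0t1d0s0  /dev/rdsk/c0t1d0s0  /data  ufs  2  yes  noatime,logging
#   /dev/dsk/c0t2d0s0  /dev/rdsk/c0t2d0s0  /logs  ufs  2  yes  forcedirectio

# Raise maxbpg (max blocks per file per cylinder group) for the large DB files:
tunefs -e 2097152 /data
```

The tunefs change takes effect on the next mount of /data.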
Saturday, October 06, 2007
Host name vs Security Compliance
(1) Update with the latest security patches and fixes for the OS, tools, and dependencies.
(2) Ensure no hostname is embedded in any UI component.
(3) No hostname in URLs or the CLI.
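Points (2) and (3) can be spot-checked by grepping the deployed tree for the machine's own hostname; DEPLOY_DIR here is a hypothetical path, not something from the checklist:

```shell
# List any files under the deployment tree that embed this machine's hostname.
DEPLOY_DIR=${DEPLOY_DIR:-/opt/app}   # hypothetical deployment root
grep -rl "$(hostname)" "$DEPLOY_DIR" 2>/dev/null || echo "no embedded hostnames found"
```

Any file it lists is a compliance finding to fix before the host can be renamed or cloned safely.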
Wednesday, October 03, 2007
News Analysis
I'm doing a Master's Thesis project based on scanning web-based news
topics and analyzing their frequency. I have a table containing rows that
represent groups of news stories about a particular topic.
Adding Missing Values for Random Variables
Missing values for real-valued attributes could be handled in the spirit of Laplacian smoothing, by adding one initial default value to every attribute: either the mean of the available observations for that attribute, or the mean of observations over some other similar subset of attributes (possibly the set of all attributes).
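A minimal sketch of the mean-as-default idea, using awk on a one-column file where a missing observation is marked "?" (the file format and the "?" marker are assumptions for illustration):

```shell
# Two-pass awk: the first pass computes the mean of the observed values,
# the second pass substitutes it for each missing ("?") entry.
impute_mean() {
  awk 'NR==FNR { if ($1 != "?") { sum += $1; n++ } next }
       { if ($1 == "?" && n) print sum / n; else print $1 }' "$1" "$1"
}
```

For example, a file containing the lines 1, ?, 3 comes back as 1, 2, 3.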
Subscribe to:
Posts (Atom)