Wednesday, October 24, 2007

LB and DS FD overflow

A Load Balancer will use tcp_connect to ping the downstream systems it is protecting, and since this is not a protocol understood by the app, you get that error. It also winds up leaving an orphaned FD *each* time.

If you do find an LB is the source, the solution is to reconfigure the LB to use an acceptable protocol (HTTP for web, LDAP for DS, etc.). Most LBs have ready-to-use scripted options among their configuration options.

Although it's not a true fix, another option is to reset the idle timeout on the DS to something like 900/1800 seconds so it can recoup the wasted FDs periodically. This is a fine temporary fix; sometimes customers leave things in that state for years. But the true fix is to reconfigure the LB to use the proper protocol.
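For example, on a 5.x-style DS the idle timeout is the nsslapd-idletimeout attribute on cn=config; a sketch (value in seconds, and "-w -" prompts for the password):

ldapmodify -D "cn=Directory Manager" -w - <<EOF
dn: cn=config
changetype: modify
replace: nsslapd-idletimeout
nsslapd-idletimeout: 1800
EOF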

cron script from open solaris


For every command in the script?

http://cvs.opensolaris.org/source/xref/jds/spec-files/trunk/cron-script.sh
See if a single-command experiment works first.

> Where does the redirection go? Inside the quoted command or for zlogin?
> zlogin -l gbuild big-zone "the_command > output_file" (redirection runs inside the zone)
> zlogin -l gbuild big-zone "the_command" > output_file (redirection runs where zlogin runs)

DS DB index out of sync - poor performance

DS shows slower search response times on Solaris 10U3 than on Windows. The cache size is the same on the two boxes, even though it should be bigger on Solaris (Windows is 32-bit, Solaris is 64-bit).

For example, this search consistently shows different etimes.

On Windows the search took 0.079000 seconds:

[19/Oct/2007:15:33:41 +0200] conn=2062529 op=1 msgId=2 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:15:33:42 +0200] conn=2062529 op=1 msgId=2 - RESULT err=0 tag=101 nentries=114 etime=0.079000

However, the same search took 0.320350 seconds on Solaris:

[19/Oct/2007:17:02:30 +0200] conn=460 op=1 msgId=2 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:17:02:30 +0200] conn=460 op=1 msgId=2 - RESULT err=0 tag=101 nentries=114 etime=0.320350


In the Solaris error logs, I see "candidate not found" lines like these:

[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - SRCH base="ou=qp_na02,ou=apps,ou=internal,o=ericsson" scope=2 deref=0 sizelimit=0 timelimit=0 attrsonly=0 filter="(&(objectClass=groupOfUniqueNames)(uniqueMember=UID=XBMVIVI,OU=Partners,OU=External,O=ericsson))" attrs="cn"
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree selected backend : unix
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree release backend : unix
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree selected backend : userRoot
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - be: 'o=ericsson' indextype: "eq" indexmask: 0x2
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 306256 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 307389 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309528 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309532 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309538 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309542 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309726 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309866 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309877 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309878 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309881 not found
[19/Oct/2007:17:02:30 +0200] - DEBUG - conn=-1 op=-1 msgId=-1 - candidate 309898 not found
[19/Oct/2007:17:02:30 +0200] - INFORMATION - conn=-1 op=-1 msgId=-1 - mapping tree release backend : userRoot


It looks like the index is out of sync.
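If so, regenerating the suspect index should fix it. A hedged sketch for a DS 5.x instance (db2index.pl lives in the instance directory; on DS 6 the equivalent is dsconf reindex):

<instance-dir>/db2index.pl -D "cn=Directory Manager" -w password -n userRoot -t uniquemember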

Zone attaching to shared storage

Using "attach -F" (ie capital F) you can attach a shared storage
to a different zone even it is not de-attached with it's original
zone, if you previously created the zone configuration via "zonecfg -z
..." like on the original host.

Of course you must make sure that only one node actually boots the zone at a time.
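A minimal sketch of the flow on the second host, assuming a hypothetical zone "myzone" whose storage is already visible there:

$ zonecfg -z myzone            # recreate the same configuration as on the original host
$ zoneadm -z myzone attach -F  # force-attach, skipping the usual detach-state checks
$ zoneadm -z myzone boot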

Thursday, October 11, 2007

Scheme Setup

(1) install Petite Chez Scheme
(2) install eopl source code
(3) change dir

(cd "/home/test/eopl/intercept")
(4) init code

(load "chez-init.scm")

Monday, October 08, 2007

DB cache size for large DB on DS

The best results came with a maximum of about 50 GB of database cache, because of checkpoints; that is a concern here. Response times of over 1 second are caused by checkpointing, so reducing the DB cache size may help: it is a very large DB, and there is not a lot of memory.

(1) Setting both the entry cache and the DB cache to 100 MB each offered the best performance - allowing the FS cache to do everything. This was on Solaris 10 with the latest updates for ZFS.
(2) We modified the file system partitions to force direct I/O on the ones we didn't want cached - this way we reserved more of the FS cache for LDAP.
(3) For example, if your logs are on a separate FS you can exclude that mount, etc. For the partition/disk that holds the database, set "noatime" & "logging", and if "/data" is the disk that holds the database, run: tunefs -e 2097152 /data
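A hypothetical /etc/vfstab sketch for the UFS case: direct I/O on mounts you don't want cached, noatime/logging on the database file system (the slice names are made up):

#device to mount   device to fsck      mount point  FS   fsck  at boot  options
/dev/dsk/c1t0d0s5  /dev/rdsk/c1t0d0s5  /logs        ufs  2     yes      forcedirectio
/dev/dsk/c1t0d0s6  /dev/rdsk/c1t0d0s6  /data        ufs  2     yes      noatime,logging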

Saturday, October 06, 2007

Host name vs Security Compliance

(1) update with latest security patches and fixes for OS, Tools, dependencies
(2) ensure no hostname is embedded in any UI component
(3) no host name in URL or CLI

Wednesday, October 03, 2007

News Analysis

I'm doing a Master's Thesis project based on scanning web-based news
topics and analyzing their frequency. I have a table containing rows that
represent groups of news stories about a particular topic.


Adding Missing Values for Random Variables

Missing values for real-valued attributes could be handled via Laplacian smoothing, by adding one initial default value to every attribute: either the mean of the available observations for the attribute, or the mean of observations for some similar subset of attributes (possibly the set of all attributes).

Saturday, September 29, 2007

How to define a set of values

BNF is the way to define a set of values. This set of rules is called a grammar: a BNF grammar defines a set of values.

<list-of-numbers> ::= ()
<list-of-numbers> ::= (<number> . <list-of-numbers>)

Thursday, September 27, 2007

ZFS and DS

MODS/WRITES with ZFS compression
It would seem that using ZFS with compression could theoretically yield some amazing write performance, but I haven't done any testing myself yet. Any figures or observations here?

CACHE COMPRESSION?
I'm not sure what they meant there, because I have seen that ZFS uses the ARC for relinquishing memory to the cache. However, the memory-relinquishing algorithm was less than desirable for database apps. But cache compression was to minimize the memory footprint of the DS entries within the FS cache, which sounds great, if it were possible, for large databases.

Lambda Calculus

Lambda calculus, also λ-calculus, is a formal system designed to investigate (1) function definition, (2) function application, and (3) recursion.

Lambda calculus can be used to define (1) what a computable function is. The question of whether two lambda calculus expressions are equivalent cannot be solved by a general algorithm; this was the first problem, even before (2) the halting problem, for which undecidability could be proved.

Lambda calculus can be called the smallest universal programming language. It consists of:

(1) a single transformation rule - variable substitution

(2) a function definition scheme.

Lambda calculus is universal in the sense that any computable function can be expressed and evaluated using this formalism. It is thus equivalent to the Turing machine formalism.

Lambda calculus emphasizes the use of transformation rules (variable substitution) and does not care about the actual machine implementing them. It is an approach more related to software than to hardware.


Samba with Solaris Zone

touch /etc/krb5.keytab

The smb.conf file was not configured to use a Kerberos keytab, but Samba will try to read one anyway, so create an empty /etc/krb5.keytab.

Storage Data Classification & Storage Policies

The archive solution is designed for tape and disk storage devices. The NAS package provides the file-level representation of data. The Storage Management System requires data classification and policies.

Data classification is done based on the file system and a few other attributes. The attributes include file name, type, ownership, size, access rights and age. Classification can be done automatically using customer-defined categories.

The customer-defined data classification categories are assigned to storage management policies. Policies control the placement of the files onto the various types of physical storage, as well as replication, deletion, and access.

DS replication and search

Enabling or disabling replication should not impact searches.

Oracle Database and Solaris Zone

The Oracle database product line comes in two significantly different forms:

1) Oracle has a single-machine version of its database product. It supports zone deployment today.

2) Oracle has a version of its database product that works cooperatively and concurrently on multiple nodes. This product is called Oracle RAC. Oracle RAC does not run in zones now, but it will do so soon.

POSIX Thread, Select & Poll

The model of simply using one thread per connection, while simpler to code and support,
is viable for only a few to several hundred (possibly a few thousand) threads.

From a performance perspective, the select(3C) man page has stated (since circa Solaris 2.6 days, IIRC): "poll(2) function is preferred over this function." select(3C) in Solaris is very inefficient, as it has to translate the bitfields in user space and calls poll() to do the kernel work anyway. Not to mention that recompiling an application that uses select(3C) as a 64-bit binary can result in a 500% performance drop. This occurs if FD_SETSIZE is not modified, as it goes from 1024 in 32-bit compiles to 65536 in 64-bit compiles. So avoid select like the plague if performance is an issue for your application. select has been implemented as a direct syscall in several other Unix versions, so code calling select(3C) will likely underperform on Solaris compared to other OSs.

Even poll(2) suffers a lot of unnecessary performance degradation, so, if dealing with thousands of descriptors, the far more scalable /dev/poll ("man -s 7d poll") or the Event Completion Framework should be used. Here's an article written back in 2002 comparing the scalability of poll(2) and /dev/poll:

http://developers.sun.com/solaris/articles/polling_efficient.html

And it seems /dev/poll has been considered a bit too complicated for a lot of real-world uses, and it also suffers some performance issues when the list of file descriptors to monitor changes quickly, so the even newer Event Completion Framework/Ports was created to deal with both issues:

http://developers.sun.com/solaris/articles/event_completion.html
http://partneradvantage.sun.com/protected/solaris10/adoptionkit/tech/tecf.html

Both /dev/poll and Event Ports have been ported to Linux. So, whether they are POSIX standards, extensions, or just de facto standards, they must be considered, since they are the only tools for the job when the state of many thousands of connections must be polled.

SAN Disk/ Tap backup with Solaris Container

Several systems are set up as SAN media servers so that backup to tapes occurs from SAN-disk to SAN-tape rather than across a network.

The simplest approach is for each non-global zone to send its backup stream to the media server. It avoids the potential security issues which must be addressed if you back up the zones' file systems from the global zone.

DS database minimum cache size heuristics

Size the DB cache at least large enough to hold the indexes. It is true that during a search DS first reads the indexes and then the entries, but I would not make any assumption about what that means for the database cache. The following is usual when coding a cache algorithm; it is true for the entry cache and the changelog cache, but I do not know if it is also true for the DB cache.

When there is not enough space to add some data, the data that has not been used for the longest time is removed and the new data replaces it (LRU/MRU).

So if you do not have enough cache for both kinds of data, you will rather end up with the most frequently used index and entry pages staying in the cache while the others go in and out. IMHO, there is probably no "priority" of indexes versus entries...

For modifies, things are much more complex: the dn2entryid index is read first, then the entry, then all the indexes.
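On a 5.x-style instance, the DB cache can be resized for such experiments with ldapmodify (a sketch; nsslapd-dbcachesize is in bytes and takes effect after a restart):

ldapmodify -D "cn=Directory Manager" -w - <<EOF
dn: cn=config,cn=ldbm database,cn=plugins,cn=config
changetype: modify
replace: nsslapd-dbcachesize
nsslapd-dbcachesize: 104857600
EOF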

System Call vs. POSIX Thread on Select & Poll

The Solaris system calls select() (BSD) and poll() (AT&T) are _not_ POSIX; BSD and AT&T Unix each had their own way of doing this. POSIX threads are recommended over relying on select() or poll() if you really want to write POSIX-compliant code; with POSIX threads you should be able to avoid the need for these two system calls. Alternatively you could supply an extra .c file which contains the call to the poll() system call; this file must be compiled with the _POSIX_C_SOURCE macro undefined. (You will be calling poll() even if you call select(): as Solaris 2.x is derived from AT&T, poll(2) is used to implement select(3C).)

Thursday, September 06, 2007

XWindows on Solaris 10

It requires a fully qualified hostname in the case of VPN client access.

(1) ssh -X -l <user> <host>
then on the remote host directly execute the UI application

ssh -X forwards X11, so there is no need to export DISPLAY any more


(2) vnc

run vncserver on the remote server as

vncserver :1
it may prompt for a password if you do not have one yet

on the local client

vncviewer :1
type in password

Thursday, August 30, 2007

SOA Enable C+

Many WS-enabling tools are available on the market for C++, such as gSOAP, OSS, etc. Other options, especially if you use Java CAPS, are TCP/IP, HTTP with a non-SOAP payload, and JMS. SGF is always an option. Wrapping C++ with a Java WS is not a very good idea.


The combination of gSOAP and JMS for integration with C++ has had a successful production deployment.

Wednesday, August 29, 2007

Avoiding the system call time() on Solaris

Some products heavily use the times() syscall to measure CPU time and system time.
I'd like to get rid of any system call overhead if possible.

http://developers.sun.com/solaris/articles/time_stamp.html

This article shows how to optimize the performance of enterprise systems that employ extensive time-stamping using time(2).

File Descriptor and Solaris Sockets

If rlim_fd_max is the same as rlim_fd_cur, this means only 256 file descriptors are being allocated. Set rlim_fd_max to at least 2048 or higher for a highly loaded system.
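A /etc/system sketch (the values are illustrative; a reboot is required):

* raise the file descriptor limits for all processes
set rlim_fd_max=65536
set rlim_fd_cur=8192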

Friday, August 17, 2007

JMS design and analysis

(1) Design Pattern: Publish Subscription

1 Producer -----(1 Message) ---> Topic (Destination) ----> K Consumers
K Producers ----(K Messages) ---> Topic (Destination) ---> K Consumers

(2) Service Reliability: Message Reliability:

With normal topic subscription, only active consumers subscribed
to the topic will receive messages.

With durable subscriptions, inactive consumers are ensured
to receive the message when they subsequently become active.

Intuitively, the topic does not hold the messages it receives
unless it has inactive consumers with durable subscriptions.

Hence, durable subscription is the service reliability practice.

(3) Best Practice-1: Development with Unified Domain Model

Even though the domain-specific interfaces (Queue, Topic)
are backward-supported for legacy purposes, the best development
practice is to use the unified domain interface,
which transparently supports the P-P and Pub-Sub models.


(4) Best Practice-2: Use Administered Object Store

It is recommended to create and reconfigure the connection factory
with the administration tools. Admin objects such as the ConnectionFactory
are then placed in an administered object store. This decouples the JMS
application code and keeps it portable among different JMS providers.



(5) Best Practice-3: J2EE Client By Resource Annotation

Use @Resource and no exception handling in J2EE 1.5 as best
practices.

(6) Code Segment: to show the message life cycle

Use the unified domain: the createConnection method is in javax.jms.ConnectionFactory. (The snippets below assume imports of java.util.Hashtable, javax.naming.* and javax.jms.*.)

Hashtable env = new Hashtable();
env.put(Context.INITIAL_CONTEXT_FACTORY,
"com.sun.jndi.fscontext.RefFSContextFactory");
env.put(Context.PROVIDER_URL, "file:///amberroad:/sun_mq_admin_objects");
Context ctx = new InitialContext(env);
String CF_LOOKUP_NAME = "ARConnectionFactory";
ConnectionFactory arFactory = (ConnectionFactory) ctx.lookup
(CF_LOOKUP_NAME);
String DEST_LOOKUP_NAME = "ARTopic";
Destination arTopic = (Destination) ctx.lookup(DEST_LOOKUP_NAME);



Connection arConnection = null;
Session arSession = null;
try {
arConnection = arFactory.createConnection("amberroad", "amberroad");
arConnection.setExceptionListener(this);
arSession = arConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
MessageProducer arProducer = arSession.createProducer(arTopic);
String arSelector = "/* Text of selector here */";
MessageConsumer arConsumer =
arSession.createDurableSubscriber((Topic) arTopic, "arConsumer", arSelector, true);
TextMessage arMsg = arSession.createTextMessage();
arMsg.setText("AmberRoad Test");
// at message level specification
arMsg.setJMSReplyTo(replyDest); // replyDest is assumed to be defined elsewhere
arMsg.setJMSDeliveryMode(DeliveryMode.PERSISTENT);
arMsg.setJMSPriority(Message.DEFAULT_PRIORITY);
arMsg.setJMSExpiration(1000L);
arProducer.setDisableMessageTimestamp(true); // a producer-level setting, not per-message
arProducer.send(arMsg); // the producer is already bound to arTopic

// if not at message level, specify at producer level on send

arProducer.send(arMsg, DeliveryMode.PERSISTENT, 9, 1000L);


// if set up async listener

ARMessageListener arListener = new ARMessageListener();
arConsumer.setMessageListener(arListener);


// synchronous receiver (receive() blocks; receiveNoWait() returns immediately)
arConnection.start();
Message inMsg = arConsumer.receiveNoWait(); // distributed apps

inMsg.clearBody(); // consumer clears the body for reuse
--------------------

arConnection.stop(); // in case to suspend messaging

} catch (JMSException e) {
e.printStackTrace();
} finally {

arSession.unsubscribe("arConsumer");
arConnection.close();

}

Note


(1) For the publish/subscribe domain, to create durable topic subscriptions,
a client identifier arrangement is made by configuring the client runtime
to provide a unique client identifier automatically for each JMS application.

(2) For message consumption with auto-acknowledge mode, the Message Queue
client runtime immediately sends a client acknowledgment for each message
it delivers to the message consumer; it then blocks, waiting for a return
broker acknowledgment confirming that the broker has received the client
acknowledgment. This keeps the JMS application code free of acknowledgment logic.


(3) For message producers, the broker's acknowledgment behavior
depends on the message's delivery mode defined in the message header.
The broker acknowledges the receipt of persistent messages
but not of non-persistent ones; this is not configurable by the client.




For Receiving Messages Asynchronously



public class ARMessageListener implements MessageListener
{
public void onMessage (Message inMsg) // onMessage may not throw checked exceptions
{
try {
Destination replyDest = inMsg.getJMSReplyTo();
long timeStamp = inMsg.getLongProperty("JMSXRcvTimestamp");
Enumeration propNames = inMsg.getPropertyNames();
String eachName;
Object eachValue;

while ( propNames.hasMoreElements() )
{
eachName = (String) propNames.nextElement();
eachValue = inMsg.getObjectProperty(eachName);

}
String textBody = ((TextMessage) inMsg).getText();
} catch (JMSException e) {
e.printStackTrace();
}
}
}

Thursday, August 16, 2007

Weka running for Linux and Unix

(1) Under package weka.gui

LookAndFeel.props

(2) uncomment the first configuration

# Look'n'Feel configuration file
# $Revision: 1.1.2.2 $

# the theme to use, none specified or empty means the system default one
Theme=javax.swing.plaf.metal.MetalLookAndFeel
#Theme=com.sun.java.swing.plaf.gtk.GTKLookAndFeel
#Theme=com.sun.java.swing.plaf.motif.MotifLookAndFeel
#Theme=com.sun.java.swing.plaf.windows.WindowsLookAndFeel
#Theme=com.sun.java.swing.plaf.windows.WindowsClassicLookAndFeel

Wednesday, August 15, 2007

Classification Time Complexity

NBTree is O(n^3)
Decision trees are O(n^2)
Naive Bayes is O(n).

Sunday, August 12, 2007

NBTree Algorithmic Time Complexity

NBTree uses cross-validation at each node to decide
whether to split or to construct a naive Bayes
model.

Friday, August 10, 2007

Curse of dynamic programming and stochastic control

(1) Curse of parameteric approximation to cost-to-go function
(2) modeling without closed form objective function

function approximation, iterative optimization, neural network learning, dynamic programming

Monday, August 06, 2007

Attribute Selection With Weka

Using the "Select attributes" tab in Weka to do some
wrapper-based feature subset selection, I encounter several different
search methods.

Greedy stepwise with parameters
"conservativeForwardSelection" = False
"searchBackwards" = False

It does forward selection starting from an empty set of
attributes. It stops adding attributes as soon as there
is no single addition that improves upon the current
best subset's merit score.


Greedy stepwise with parameter
"conservativeForwardSelection" = True, and
"searchBackwards" = False

It does the same, but it will continue to add new features as
long as they do not decrease the merit of the current best subset.


BestFirst with parameter "direction" = Forward
BestFirst is a beam search. It allows backtracking to
explore other promising search paths.

The "More" button in the GenericObjectEditor
when selecting/altering parameters for
search methods in the Explorer.

Thursday, August 02, 2007

Zone and CPU shares

$ pooladm -e
$ pooladm -s
$ pooladm -c

$ poolcfg -c 'create pset pset1 (uint pset.min = 2 ; uint pset.max = 2)'
$ poolcfg -c 'create pset pset2 (uint pset.min = 1 ; uint pset.max = 1)'

$ poolcfg -c 'create pool pool1'
$ poolcfg -c 'create pool pool2'

$ poolcfg -c 'associate pool pool1 (pset pset1)'
$ poolcfg -c 'associate pool pool2 (pset pset2)'

$ pooladm -c

Assuming the zones are up & running:
$ poolbind -p pool1 -i zoneid machine1
$ poolbind -p pool2 -i zoneid machine2
$ poolbind -p pool2 -i zoneid machine3
$ poolbind -p pool2 -i zoneid machine4
$ poolbind -p pool2 -i zoneid machine5

To make the bindings persistent, use
$ zonecfg -z <zonename> set pool=<poolname>
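The commands above only cover processor sets. To give zones CPU shares under the Fair Share Scheduler instead, a sketch (hypothetical zone "myzone"; this is the rctl form of the setting):

$ dispadmin -d FSS
$ zonecfg -z myzone
zonecfg:myzone> add rctl
zonecfg:myzone:rctl> set name=zone.cpu-shares
zonecfg:myzone:rctl> add value (priv=privileged,limit=10,action=none)
zonecfg:myzone:rctl> end
zonecfg:myzone> commit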

Sun Cluster supports Zone

SC3.2 does support treating a zone as a cluster node. See http://docs.sun.com/app/docs/doc/819-6611/6n8k5u1mc?a=view#gdamq

Zone memory limits and SWAP issues

By default there is no memory limit for zones: a zone's processes are allowed to
use as much RAM and swap as they want. Resource controls can cap this.

Fork calls fail when there isn't enough swap space; fork returns the error
ENOMEM, which is 12. The second failure in your log below returned a status
of 12.
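If you do want a cap: newer Solaris 10 updates add a capped-memory resource to zonecfg (a sketch, assuming a hypothetical zone "myzone" and an update that supports it):

$ zonecfg -z myzone
zonecfg:myzone> add capped-memory
zonecfg:myzone:capped-memory> set physical=2g
zonecfg:myzone:capped-memory> set swap=4g
zonecfg:myzone:capped-memory> end
zonecfg:myzone> commit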

Sample Size and Dimensionality

Sample size and dimensionality are critical to parametric optimization
in machine learning and prediction. Small datasets with high
dimensionality pose the low-ROC problem in the research community.

A naive Bayes classifier (Maron, 1961) is a simple probabilistic
classifier based on applying Bayes’ theorem with strong independence
assumptions. Depending on the precise nature of the probability model,
naive Bayes classifiers can be trained very
efficiently in a supervised learning setting. In many practical
applications, parameter estimation for naive Bayes models uses the
method of maximum likelihood. Recent research on the Bayesian
classification problem has shown that there are some theoretical reasons
for the apparently unreasonable efficacy of naive Bayes
classifiers (Zhang, 2004). Because independent variables are assumed,
only the variances of the variables for each class need to be determined,
not the entire covariance matrix. Hence, the naive Bayes classifier
requires little training data for classification prediction.

Support vector machines (SVMs) are another set of supervised learning
methods for classification (Cortes & Vapnik, 1995). An SVM maps input
vectors to a higher-dimensional space where a maximal separating
hyperplane is created. Two parallel hyperplanes
are constructed on each side of the hyperplane that separates the samples.
The separating hyperplane is the one that maximizes the distance
between the two parallel hyperplanes. The larger the margin or distance
between these parallel hyperplanes, the better the generalization
error of the classifier will be. SVMs require large samples.

TCP monitoring

(1) List all TCP tunables

ndd /dev/tcp \?

(2) Get a TCP tunable's value

ndd -get /dev/tcp tcp_conn_req_max_q0

(3) Set a TCP tunable

ndd -set /dev/tcp tcp_conn_req_max_q0 <value>

Tuesday, July 31, 2007

Divide my dataset into subsets to perform some experiments on each

You can add an ID attribute to all your data using the AddID filter in
weka.filters.unsupervised.attribute. Following this you can create your
splits explicitly using filters in the weka.filters.unsupervised.instance package (e.g. RemovePercentage, RemoveRange and RemoveFolds) or use the cross-validation (or
percentage split) evaluation options in the Explorer. In order to make
sure that the ID attribute is not used by the learned models you can
use the weka.classifiers.meta.FilteredClassifier in conjunction with
your chosen classifier and the weka.filters.unsupervised.attribute.Remove filter in order to remove the ID attribute just prior to constructing a classifier (and at
testing time too). With the current snapshot of the developer
version of Weka, you can also output additional attributes alongside the
predictions (in your case, the ID attribute).

Applying Quantum Principles to GAs for Multiple Objective Scheduling

The multi-objective scheduling problem has been studied in the literature (T'kindt et al., 1996). However, conventional metaheuristic methods, such as GA algorithms, were studied with a single objective function to derive combinatorial optimization.

Recent advances in GAs and multi-objective research (Han et al., 2005) applied principles of quantum computers to stochastic optimization problems. In addition, Q-bit representation and permutation-based GAs were advocated by researchers (Li & Wang, 2007). For multi-objective scheduling, a Q-bit GA needs to obtain good approximations for both cooperative and competitive task environments. Moreover, Q-bit-based permutation, crossover operators, selection, encoding, generation processes and fitness values are required to explore and exploit the large or high-dimensional state spaces. It is a relaxed minimization problem with the Q-bit.

Algorithm evaluation needs to combine a vector of all local objective function values for each job. However, local optima per job and a global optimum for the entire task environment indicate a further layer of constraint satisfaction with enumeration of iterative policy and state transitions. Beyond optimizing the enumeration, parallelization may need to be considered at both the system and application level. Furthermore, pipelined processing and data parallelization are critical to reduce both time and space complexity.

Han et al. (2005). A quantum-inspired genetic algorithm for flow shop scheduling. Springer-Verlag.
T'kindt et al. (1996). Multicriteria Scheduling: Theory, Models and Algorithms. Springer-Verlag, 2002.

Enable and Disable dhcpagent

To enable and disable the DHCP agent:

sys-unconfig
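sys-unconfig reconfigures the whole box; dhcpagent can also be driven per interface (a sketch, assuming interface bge0):

ifconfig bge0 dhcp start     # start dhcpagent on this interface
ifconfig bge0 dhcp release   # give up the lease and stop managing it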

Thursday, July 26, 2007

Balance Dataset

A dataset has 2 unbalanced classes: 7,500 rows belong to class A
and 2,500 rows belong to class B. How do I randomly select rows from
class A and class B to balance the dataset?


Use weka.filters.supervised.instance.SpreadSubsample with a
value of 1 (uniform) for the distributionSpread parameter.

Tuesday, July 24, 2007

Selective Attribute for classification training

The easiest way to do this would be to first make a copy of the
Instances object (Instances copy = new Instances(train)) holding your
training data. Then, for each instance, set the values that you don't
want to train on to "missing". I.e., assume i is an instance
and the value at attribute 3 is to be skipped when training naive
Bayes; then i.setMissing(2) would do the trick. Note that this
approach is specific to the way that naive Bayes works.

BI Feature Selection

A BI feature selection description:


"The random search method (weka.attributeSelection.RandomSearch)
in the attribute selection package that can be combined with any
feature subset evaluator in order to search for good subsets randomly.
If no start set is supplied, random search starts from a random point
and reports the best subset found. If a start set is supplied, it
searches randomly for subsets that are as good as or better than the
start point, with the same or fewer attributes."

But heuristically, with some confidence, the 50000
features selected using chi-squared correlation produce a more
accurate SVM model than 50000 features selected uniformly at random.

Thursday, July 19, 2007

CPU performance counter

(1) AMD Performance Counter, http://developer.amd.com/article_print.jsp?id=90
(2) cpustat and cputrack , trapstat
(3) libcpc

ID Problem Formulation

(1) Since I formulated ID as a stochastic DP problem, it owns the properties of dynamics.
(2) To handle the large state space and unknown attack types, the DP problem is transformed into an adaptive tuning problem: adaptiveness in terms of tuning the networks interactively.

The properties of the problem formulation have been addressed in the problem
formulation section. I have mathematical proofs for the above items. This is
further addressed in the methodology section of the formal paper.

As for the time constraints, do you mean I need time to implement the
entire mathematical framework as software? If so, it is true that I need to implement it
myself, since existing tools such as MatLab only handle traditional weight tuning for fixed
neural networks. In addition, there is no RL toolbox yet, so this research counts on my own
implementation of the proposed algorithmic operations. As for the dataset preprocessing,
it will not be an issue for me since the I/O formatting is fine.

The false alarm ratio is only evaluated from known attacks, from a research
point of view. In real-time system operation, it cannot be proved by a research framework.
This is the motivation to come up with a "tuning" framework for online detection, in order to
reduce the false alarm ratio. Hence, the false alarm problem is relaxed as the problem of
"tuning" of the DP problem. This is one of the major advantages of this research proposal.

The dataset is only critical for traditional neural learning, not for this
research, and parameters are not restricted for this research either. All of these are
traditional neural learning problems; that is the motivation to propose the RL-based
"tuning" framework, which is another major advantage of this research proposal.
For the specific host and network attacks (spoofing and memory overflow)
I have mathematical proofs. For the implementation, I have to start with arbitrary
parameters and architecture. It is important to know that there is no
ready-made training set of states and ROC function in the DP context. The possible
way is to evaluate the ROC function by simulating state decisions, and afterwards
use the RL-based interactive algorithm to improve the ROC value. That is the key
point of the research design.

Versioning Manager Output

Singleton VersioningManager can output any JComponent to the console

NB Back End Threaded Progresses

(1) A Runnable for back-end long-running processes and ProgressHandle management
Runnable allRunnable = new Runnable() {
WorkspaceMgr mgr = null;
Workspace ws = PerforceConfig.getDefault().getDefaultWorkspace();
ProgressHandle handle = null;

public void run() {
try {
handle = ProgressHandleFactory.createHandle(NbBundle.getMessage(PerforceAnnotator.class, "CTL_PopupMenuItem_ViewAllChangelist"));
mgr = new WorkspaceMgr(ws);
handle.start();
showAllChangelist(changes);
SwingUtilities.invokeLater(new Runnable() {

public void run() {
ChangelistView comp = ChangelistView.findInstance();
VersioningOutputManager.getInstance().addComponent(NbBundle.getMessage(PerforceAnnotator.class, "CTL_PopupMenuItem_ViewAllChangelist"), comp);
comp.setContext(ctx, changes);
comp.setDisplayName(NbBundle.getMessage(PerforceAnnotator.class, "CTL_PopupMenuItem_ViewAllChangelist"));
//comp.open();
comp.requestActive();
}
});
} catch (WorkspaceException ex) {
Exceptions.printStackTrace(ex);
} finally {
handle.finish();
}
}
};

(2) UI current thread context

if (cmd.equals(NbBundle.getMessage(PerforceAnnotator.class, "CTL_PopupMenuItem_ViewAllChangelist"))) {
RequestProcessor.getDefault().post(allRunnable);
}

(3) So: from the current thread, spawn a thread for the progress bar and the back-end process, then return to the UI thread to populate the view

Performance Analysis and Methodology

Performance analysis and methodologies are very broad topics. It is about optimization: performance as a set of Bellman equations to solve.
Traditional enumeration of states and performance functions
does not address the performance evaluation issues,
since a traditional MDP formulation results in large state transitions
and high-dimensional performance feature extraction. A networking-only
problem formulation may involve 30+ performance
parameters, and for the Solaris kernel it involves 100+ parameters.
Hence, dynamic and adaptive performance analysis, with the associated
resource utilization analysis, may reach the optimal
performance function evaluation with fast convergence.

It involves performance metrics (parameters, feature extraction), performance functions, performance evaluation, performance learning, performance instrumentation, performance management, adaptive tuning, etc. It really depends on the specific issue how to formulate the specific problem into adequate functions and models to resolve it.

In addition, CS offers analysis and design methods such as dynamic programming, divide and conquer, greedy algorithms and amortization. They are popular techniques to address performance from subproblems up to global problems. However, to achieve end-to-end performance gains, such as in network tuning, the global optimum may be the main concern instead of local optima. Queuing theory has also been widely adopted for traditional SMP-based performance management and capacity planning.
Core-based parallelism and pipelining introduce many new issues down
the road. Does queuing theory still work well for the parallelism paradigm?
If not, what is the optimization; if yes, what is the proper queue
partitioning, etc.?


In general, quantitative methods should be the main theme of the analysis
and evaluation. It is hard to generalize as a whole; it is specific to
the target problem formulations.

P4 Annotation For Security Compliance Auditing

P4 annotate is the solution to discover the code changes per version and who submitted each change.

Friday, July 13, 2007

P4 job

(1) The P4 job specification can be customized
(2) create a p4 job with the above specification
(3) look up jobs assigned to specific developers
(4) the developer edits the source and submits the changelist
(5) the developer runs "fix" to associate the submitted "changeNo" with the job, to ensure the job ends in the closed state

P4 Labeling

Using changelists rather than labels is encouraged.

P4 branch

(1) A branch can be created with "integrate" from the "From Files" to the "To Files", followed by "submit".
However, a branch spec is the best practice for creating a branch, because with branch specs
bi-directional population can happen: integrate -b branchname -r
(2) any working branch can then populate changes with "integrate" followed by "submit" again. All conflicts will be reported during the submission phase.
(3) the resolve action will be taken by the user

HTTP/S for Performance Management Analysis/Report

First, it is a common engineering practice for an
agent to collect data for the analysis and reporting
layers in the system management space. This applies
to all industrial players.
In addition, a three-tier performance management
architecture is also considered the best
practice to scale to large data center performance
management. It can be further extended to event
collaboration for even larger performance
management across data centers.

Second, HTTPS is a security compliant requirement.
It is part of compliance practice.

The only limitation is that this comes from
host and server domain management. If it goes down to
managing small devices such as fans or power controllers,
the only methods now are SNMP or SNMP/S.

Crypto on T2000

The crypto framework supports the SCA6000 crypto provider for all non-global zones, if it's configured. Changes to the crypto framework config can only be made from the global zone. You can list the providers from within a non-global zone but cannot change them.
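For example (listing works in a non-global zone too, while changes work only from the global zone):

cryptoadm list       # list installed providers
cryptoadm list -p    # list providers with their policy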

Thursday, July 05, 2007

Solaris Resource Management

Process-level resource controls take a pid, which
exists only at run time. So we create a project and
assign the user "progress" to it, in order to modify the
resource controls on a per-project basis without
knowing the pid. Of course, I want to try
pid too.


(1) Create a project
# projadd -U progress -p 8888 openedge
(2) projmod -c "It is project for resource control on openedge database" openedge
(3) List projects created
# project -l
ksh: project: not found
# projects -l
system (System built-in project, project id 0)
projid : 0
comment: ""
users : (none)
groups : (none)
attribs:
user.root (System built-in project, project id 1)
projid : 1
comment: ""
users : (none)
groups : (none)
attribs:
noproject (System built-in project, project id 2)
projid : 2
comment: ""
users : (none)
groups : (none)
attribs:
default (System built-in project, project id 3)
projid : 3
comment: ""
users : (none)
groups : (none)
attribs:
group.staff (System built-in project, project id 10)
projid : 10
comment: ""
users : (none)
groups : (none)
attribs:
openedge (Project we created with designated id 8888)
projid : 8888
comment: "It is project for resource control on openedge database"
users : progress
groups : (none)
attribs:
(4) check project membership


id -p

# id -p
uid=0(root) gid=0(root) projid=1(user.root)

You can see that root belongs to the built-in project id 1, which is user.root

# prstat -J
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
707 noaccess 222M 133M sleep 59 0 0:01:47 0.0% java/55
1025 root 4776K 4232K cpu8 59 0 0:00:00 0.0% prstat/1
118 root 5216K 4728K sleep 59 0 0:00:02 0.0% nscd/26
117 root 4640K 4008K sleep 59 0 0:00:00 0.0% picld/4
125 root 2592K 2080K sleep 59 0 0:00:00 0.0% syseventd/14
258 daemon 2752K 2432K sleep 59 0 0:00:00 0.0% statd/1
371 root 4856K 1672K sleep 59 0 0:00:00 0.0% automountd/2
97 root 2552K 2176K sleep 59 0 0:00:00 0.0% snmpdx/1
55 root 9160K 7528K sleep 59 0 0:00:01 0.0% snmpd/1
308 root 2080K 1224K sleep 59 0 0:00:00 0.0% smcboot/1
259 daemon 2432K 2136K sleep 60 -20 0:00:00 0.0% nfs4cbd/2
249 root 2728K 1632K sleep 59 0 0:00:00 0.0% cron/1
9 root 11M 10M sleep 59 0 0:00:19 0.0% svc.configd/17
7 root 19M 17M sleep 59 0 0:00:08 0.0% svc.startd/12
136 daemon 4680K 3528K sleep 59 0 0:00:00 0.0% kcfd/5
PROJID NPROC SIZE RSS MEMORY TIME CPU PROJECT
1 5 10M 9488K 0.0% 0:00:00 0.0% user.root
3 1 1376K 1280K 0.0% 0:00:00 0.0% default
0 37 390M 257M 0.7% 0:02:21 0.0% system


Total: 43 processes, 218 lwps, load averages: 0.01, 0.01, 0.01

# id -p root
uid=0(root) gid=0(root) projid=1(user.root)
# id -p daemon
uid=1(daemon) gid=1(other) projid=3(default)
# id -p noaccess
uid=60002(noaccess) gid=60002(noaccess) projid=3(default)


svcadm enable system/pools:default (resource pools framework)
svcadm enable system/pools/dynamic:default (dynamic resource pools)
svcadm enable svc:/system/pools:default (enable DRP service)

Check if pool services and dynamic pool service are enabled

# svcs *pool*
STATE STIME FMRI
online 11:10:01 svc:/system/pools:default
online 11:11:05 svc:/system/pools/dynamic:default

Shared memory settings in Solaris 10 and later

(1) If it is a shared memory issue, it may not be zone-specific but S10-specific: editing /etc/system is no longer good practice; follow the S10 resource control practices instead.
(2) Use prctl for project-based or even process-based control. However, you need to create a project and assign the user running the process to the project. I have experienced some bugs doing this assignment; the workaround is to log in as the user and then su to root to assign the project.
(3) projmod -s -K "project.max-shm-memory=(privileged,8GB,deny)" xxx

You may need to run it in the global zone first before moving into the local zone.
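For a running workload, the same control can also be changed on the fly with prctl (a sketch using the hypothetical project "openedge" from above):

prctl -n project.max-shm-memory -v 8gb -r -t privileged -i project openedge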

Core and LDOM Performance Management

Core- and LDOM-based system operation introduces parametric
modeling and approximation-optimization problems, moving from
traditional execution time to throughput, IPC, parallelism
and pipelining. This applies to OS modeling and
performance management, and it impacts predictive
monitoring, analysis and reporting. This has been my
engineering interest since I worked in the
system management and integration spaces.

Tuesday, July 03, 2007

From the CMT perspective, our T1, AMD and Intel x86 platforms do introduce variance from traditional
SMP platforms. First, from the HW platform point of view, physical processor structures changed to core-based.
Second, from the Solaris point of view, kernel CPU structures, CPC counters, cputrack, cpustat and
kstat changed (including all core changes). Third, from the system management point of view, the SNMP MIB-II
database structure changed. This will impact current system management parametric model learning and system management view design.
From the LDOM perspective, more virtualization-based predictive modeling and reporting design needs to
be enhanced to predict and measure physical resources, kernel CPU structures, CPC counters and MIB
structures.

In general, for system management to be an early CMT and LDOM adopter, it may need some level of support from Sun. This is just a proactive assessment.

Friday, June 29, 2007

Load Generation Appliance

Spirent AVALANCHE LOAD TESTING APPLIANCE

http://www.spirentfederal.com/

Monday, June 25, 2007

S12 Compilation flag

S12 update compilation flag: -xarch=v9 is deprecated; use -m64 to create 64-bit programs.

Wednesday, June 20, 2007

Long Pause GC ?

GC can be tuned well from 1.5.x onward.

http://www.sun.com/bigadmin/content/submitted/cms_gc_logs.html
http://java.sun.com/performance/reference/whitepapers/tuning.html
http://twiki.sfbay.sun.com/pub/MDE/ISVESystemsProjects/TS-2885-14.pdf

Interesting military Security Training, Open Solaris

http://www.gcn.com/print/26_09/43562-1.html

NB Debug Thread Dead Lock

It seems SMP helps NB debugger for current build

Tuesday, June 19, 2007

Workspace Management

Main Menu Action is triggered by "Annotator"

(1) CreateWorkspaceAction: currently sets up the workspace and saves the extra UpdateWorkspace action.
This action is triggered by the "Annotator" getAction routine
(2) A RemoveWorkspaceAction is needed to remove the current Preferences cache and the remote depot.
This action is triggered by the "Annotator" getAction routine
(3)

Monday, June 18, 2007

NB Short Build

cvs -d :pserver:leiliu@cvsnetbeansorg.sfbay.sun.com:/cvs login
cvs -d :pserver:leiliu@cvsnetbeansorg.sfbay.sun.com:/cvs co -P standard_nowww

cvs -d :pserver:leiliu@cvs.netbeans.org:/cvs login
cvs -d :pserver:leiliu@cvs.netbeans.org:/cvs co -P standard_nowww


ant -Dcluster.config=standard

Or you may need to try ant build-nozip first.


With CVS functions

ant -Dcluster.config=basic

NB cached .netbeans dir caused problem

(1) It is recommended to clean up the .netbeans dir under $HOME in order to have the new build work
fully.
(2) a short build

cvs -d :pserver:leiliu@cvsnetbeansorg.sfbay.sun.com:/cvs login
cvs -d :pserver:leiliu@cvsnetbeansorg.sfbay.sun.com:/cvs co -P standard_nowww

cvs -d :pserver:leiliu@cvs.netbeans.org:/cvs login
cvs -d :pserver:leiliu@cvs.netbeans.org:/cvs co -P standard_nowww


ant -Dcluster.config=standard

Or you may need to try ant build-nozip first.

Saturday, June 16, 2007

NB Plugin

1. rename cvsmodule as perforce
2. change all project and service meta data ->>>>> Here is PerforceVCS
3. replace.sh cvs -> perforce, CVS -> Perforce
4. PerforceRoot change back to CVSRoot first.

We can get a build now.

To change a menu action name: each action has

public String getName() {
return NbBundle.getBundle(CreateWorkspaceAction.class).getString("CTL_MenuItem_XX_Label");
}

CTL_MenuItem_XX_Label is located in Bundle.properties file of each action package

Friday, June 15, 2007

Problem of Broadcom Wireless Controller on S11 with Acer Ferrari 3400 laptop

I have a Ferrari 3400 laptop with S11, OS kernel 5.11 snv_64a i86pc i386 i86pc

(1) Locate Wireless Controller as Broadcom BCM4306 802.11b/g Wireless LAN Controller


# /usr/X11/bin/scanpci -v


pci bus 0x0000 cardnum 0x09 function 0x00: vendor 0x14e4 device 0x4320
Broadcom Corporation BCM4306 802.11b/g Wireless LAN Controller
CardVendor 0x185f card 0x1220 (Wistron NeWeb Corp. TravelMate 290E WLAN Mini-PCI Card)
STATUS 0x0000 COMMAND 0x0006
CLASS 0x02 0x80 0x00 REVISION 0x03
BIST 0x00 HEADER 0x00 LATENCY 0x40 CACHE 0x00
BASE0 0xd0014000 addr 0xd0014000 MEM
MAX_LAT 0x00 MIN_GNT 0x00 INT_PIN 0x01 INT_LINE 0x0a
BYTE_0 0x01 BYTE_1 0x00 BYTE_2 0xc2 BYTE_3 0x07


(2) modinfo | grep bcm

Found that the driver is not loaded

(3) Manually load the driver

modload /kernel/drv/amd64/bcmndis


# modinfo | grep bcm
201 fffffffff7a39000 a3598 222 1 bcmndis (bcmndis(ndis wrapper 1.6))


(4) # grep bcm /etc/driver_aliases
bcmndis "pci14e4,4320"
bcmndis "pci14e4,1a"

(5) update_drv -a -i '"pci14e4,1a"' bcmndis
("pci14e4,1a") already in use as a driver or alias

(6)
dladm show-link
bcmndis0 type: legacy mtu: 1500 device: bcmndis0
bge0 type: non-vlan mtu: 1500 device: bge0


(7) ifconfig bcmndis0 plumb

(8) ifconfig -a
lo0: flags=2001000849 mtu 8232 index 1
inet 127.0.0.1 netmask ff000000
bge0: flags=201004843 mtu 1500 index 2
inet 192.168.1.100 netmask ffffff00 broadcast 192.168.1.255
ether 0:c0:9f:9e:41:5
ip.tun0: flags=10010008d1 mtu 1366 index 3
inet tunnel src 192.168.1.100 tunnel dst 192.18.32.151
tunnel security settings esp (aes-cbc/hmac-md5)
tunnel hop limit 60
inet 129.150.13.3 --> 129.145.40.124 netmask ffffffff
bcmndis0: flags=201000842 mtu 1500 index 4
inet 0.0.0.0 netmask 0
ether 0:b:6b:4c:4a:ec
(9)
wificonfig -i bcmndis0 scan
essid bssid type encryption signallevel


It failed to discover any router or access point.



However, if we boot the 32-bit kernel, it works.

Adaptive Buffer Tuning for Data-Intensive Algebraic Operations for Parallel and Distributed Processing

Both pervasive directional graphs and intensive algebraic operations require buffer management for stochastic data processes with constrained computing resources. Algebraic computation states in final stages tend to be readily identified within a finite time horizon by sensing very abrupt transitions in system and network state spaces. But in early stages of constraints, these changes are hard to predict and difficult to distinguish from usual state fluctuations. Dynamic buffer allocation and replacement are the major techniques to construct structures for algebraic operations to ensure finite resource accesses. Hence, dynamic buffering functions and control optimization are the major primitives to construct utilities for stochastic system processes to ensure converged resource accesses. To provide adaptation to high-dimensional states, this research proposes a formal model-free buffer utility framework, rooted in reinforcement learning methods and dynamic programming techniques, to provide self-organization of buffers and to exploit parallel buffer tuning processes. To reduce time and space complexity within the large state spaces, dynamic hidden neurons with incremental tuning are proposed for non-linear value function approximation, to derive optimization procedures for optimal algebraic computational policies. For numeric and information evaluation, convergence analysis and error estimation are presented. Finally, a simulation test-bed and tuning results are discussed.

CVS Server Setup on Solaris 10

(1) download cvs binary
(2) init cvs repository
a. create repository root directory /usr/local/cvs-repository
b. create a Solaris system user/group as: cvs/cvs
c. grant ownership of /usr/local/cvs-repository to Solaris user/group cvs/cvs
d. cvs -d /usr/local/cvs-repository init
This will create repository CVSROOT under /usr/local/cvs-repository
(3) create cvs user/password
a. create a password file as "passwd" under CVSROOT directory
b. use the Perl script below to create an encoded password for the CVS user
c. assign password to solaris user "cvs"

The CVS password file is CVSROOT/passwd in the repository. It was not
created by default when you ran cvs init, because CVS doesn't know for
sure that you'll be using pserver. Even if the password file had been
created, CVS would have no way of knowing what usernames and passwords
to create. So, you'll have to create one yourself; here's a sample
CVSROOT/passwd file:

<username>:<encrypted-password>:<system-username>
Here is the Perl script to generate the encoded password:
#!/usr/bin/perl

srand (time());
my $randletter = "(int (rand (26)) + (int (rand (1) + .5) % 2 ? 65 : 97))";
my $salt = sprintf ("%c%c", eval $randletter, eval $randletter);
my $plaintext = shift;
my $crypttext = crypt ($plaintext, $salt);

print "${crypttext}\n";


I keep the preceding script in /usr/local/bin/pass.pl:

pass.pl "passwd"

output: Urmh23wFp1aOs

Then use the output password, adding this line to the CVSROOT/passwd file:

cvs:Urmh23wFp1aOs:cvs

(Here the CVS user and the Solaris user are both "cvs")


The format is as simple as it looks. Each line is:

<username>:<encrypted-password>:<system-username>

c. in the /etc/inetd.conf add one line as

cvspserver stream tcp nowait root /opt/sfw/bin/cvs cvs --allow-root=/usr/local/cvs-repository pserver


d. On Solaris 10, changes to inetd.conf do not take effect directly; they must become an SMF service manifest.
Use the "inetconv" command to convert the inetd.conf entry into a manifest such as /var/svc/manifest/network/rpc/100235_1-rpc_ticotsord.xml.
In addition, add one line in /etc/services to give permission:
cvspserver 2401/tcp
e. this service will be auto-started by the SMF daemon
f. verify pserver service is started
# svcs | grep cvs
online 14:18:27 svc:/network/cvspserver/tcp:default
g. use login test

cvs -d :pserver:cvs@<host>:/usr/local/cvs-repository login

Thursday, June 14, 2007

Kernel Module Load and Network setup

1. all amd64 drivers

/kernel/drv/amd64

2. acer 3400, bcmndis is the WIFI driver

3. modinfo | grep bcmndis

to check if the module is loaded

4. modload bcmndis

5. ifconfig bcmndis0 plumb

6. wificonfig -i bcmndis0 scan

7. setup connection with wificonfig

8. there is a link to update the Broadcom driver

http://blogs.sun.com/pradhap/entry/ferrari_4000_flash_install

Wednesday, June 13, 2007

Setup Acer Solaris x86 WIFI

Acer Aspire 9300, Solaris X86, Atheros Wifi NIC
Submitted by spp on Tue, 2006-10-31 12:22.

Got the Atheros 802.11abg NIC working on my new Acer laptop under Solaris X86. I tried to follow the instructions at the atheros driver page, but they are a little out of date. The atheros driver has been integrated into OpenSolaris, so only one or two instructions are correct. However, it did put me on the right track.

First we need to make sure the driver is attached and we can start the interface

1. Find the vendor and device IDs
#/usr/X11/bin/scanpci
pci bus 0x0004 cardnum 0x05 function 0x00: vendor 0x168c device 0x001a
Atheros Communications, Inc. AR5005G 802.11abg NIC
2. Check in /etc/driver_aliases for Atheros (ath) mappings. Format of file is 'alias "pciXXXX,YYYY"' where XXXX is "vendor 0xXXXX" and YYYY is "device 0xYYYY" minus any beginning zeros.
#grep ath /etc/driver_aliases
ath "pci168c,13"
ath "pci168c,1014"
3. update the driver to include the new device (note that the single quotes are needed in order to pass through the double quotes)
#update_drv -a -i '"pci168c,1a"' ath
4. now, plumb the interface
#ifconfig ath0 plumb
5. either now, or before the plumb, you can find out what wifi access points are available
#wificonfig scan
essid bssid type encryption signallevel
you should see a list here

At this point, the instructions say that if you aren't running authentication, you can just use "ifconfig ath0 dhcp", but I am using encryption, so I moved on to trying to use wificonfig. Unfortunately, there are some mistakes here (possibly out of date and changed, not strictly incorrect). The biggest issue I found was that the instructions always reference using "-i [interface]", but that option isn't valid for most of the configuration (and the error message doesn't really make that easy to see).

1. create a profile to store my ESSID and WEP information in (a write-only, non-readable profile). Names changed for security... note that wepkey# has to be the actual WEP key, not the passphrase, which makes life significantly more difficult.
#wificonfig createprofile home essid=HOME encryption=WEP wepkey1=10hexkey
2. activate the profile
#wificonfig connect home
wificonfig: connecting to profile 'home'
3. Now, like the earlier instructions, you can start dhcp
#ifconfig ath0 dhcp
4. And, make sure we have connection
#ifconfig ath0
ath0: flags=201004843 mtu 1500 index 3
inet 192.168.21.100 netmask ffffff00 broadcast 192.168.21.255
ether 0:16:cf:6f:a:92

Tuesday, June 12, 2007

LDAPv3 Unauthenticated binding

The LDAPv3 specifications introduced an unintuitive feature with regard to authentication: the unauthenticated bind.
When an LDAP application provides a DN but no password, the bind request is successful, BUT the user is not authenticated and has the same access rights as an anonymous user.

Note that DS 6.0 now has a configuration parameter to disable unauthenticated Binds, and remove this unconventional authentication "feature" of LDAPv3.

LDOM Virtual Disk

LDom vdisks are not SCSI disks; therefore the SCSI target ID is missing and the disks have names of the form cNdNsN.

Bind raw or block disk to disk service

You can bind either a raw disk (/dev/rdsk/c1t1d0) or a block disk (/dev/dsk/c1t1d0) to a disk service.

Zones and ZFS Pool

Two zones were created on cluster node 1 in a ZFS pool. Since I need the zones to be in the installed state on each node in the cluster, is there a way to bypass having to install the zones on each node? Yes: use shared storage and move the ZFS pools back and forth.

Just a few simple steps:

1. zonecfg -z zone1 export > myfile
2. fail the storage with the root path over to the second node
3. get myfile over to the second node
4. configure the zone with zonecfg -z zone1 -f myfile
5. attach the zone with zoneadm -z zone1 attach -F

NB Build and run

export ANT_OPTS="-Xmx196m"


mkdir netbeans
cd netbeans
cvs -d :pserver:leiliu@cvs.netbeans.org:/shared/data/ccvs/repository -q co nbbuild
ant -f nbbuild/build.xml checkout
ant -f nbbuild/build.xml

netbeans/nbbuild/netbeans/bin/netbeans to invoke IDE

FIPS compliance Security Crypto Module

Federal Information Processing Standards (FIPS) are publicly announced standards developed by the United States Federal government for use by all non-military government agencies and by government contractors. Many FIPS standards are modified versions of standards used in the wider community (ANSI, IEEE, ISO, etc.)


The National Institute of Standards and Technology (NIST) issued the 140 Publication Series to coordinate the requirements and standards for cryptographic modules which include both hardware and software components for use by departments and agencies of the United States federal government. FIPS 140 does not purport to provide sufficient conditions to guarantee that a module conforming to its requirements is secure, still less that a system built using such modules is secure. The requirements cover not only the cryptographic modules themselves but also their documentation and (at the highest security level) some aspects of the comments contained in the source code.

http://en.wikipedia.org/wiki/FIPS_140
http://en.wikipedia.org/wiki/Federal_Information_Processing_Standard

Sun's Cryptographic Accelerator 6000 provides exactly such a storage mechanism and API/tool set. The Cryptographic Accelerator is available for Solaris (SPARC and x86/x64) and Linux and is FIPS 140-2 Level 3 certified. Its key storage mechanism is also RF shielded and tamper-proof. It's probably one of the fastest cards on the market for accelerating SSL, IPsec/IKE, and other general crypto, and it's inexpensive (less than $1,500 list).

http://www.sun.com/products/networking/sslaccel/suncryptoaccel6000/details.xml

Monday, June 11, 2007

ld: fatal: symbol is multiply-defined:

ld: fatal: symbol `bar' is multiply-defined:
(file foo.o and file bar.o);
ld: fatal: File processing errors. No output written to int.o

foo.c and bar.c have conflicting definitions for the symbol bar. Because the link-editor cannot determine which should dominate, the link-edit usually terminates with an error message. You can use the link-editor's -z muldefs option to suppress this error condition, and allow the first symbol definition to be taken.

Resolve by linking with the flag:

-z muldefs
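A minimal sketch of the error and the workaround, using two hypothetical files that both define the symbol bar:

# foo.c contains: int bar = 1;
# bar.c contains: int bar = 2;
cc -c foo.c bar.c
cc -o int foo.o bar.o               # fails: symbol `bar' is multiply-defined
cc -z muldefs -o int foo.o bar.o    # links; the first definition (in foo.o) is taken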

LDOM Items

(1) LDoms: SPARC only, on Niagara platforms; Solaris as the control domain (dom0) with any OS in the guest domains (domU)
(2) x86: Solaris dom0, and any OS domU
(3) x86: other-OS dom0 is not verified yet

Sunday, June 10, 2007

fatal: relocation error: R_AMD64_PC32

I am developing a C simulation tool. I am invoking one of my dynamic libraries
(a home-grown library named randlib.so).

It has a function:

double unifrnd(double, double, long*);

In my simulation application, I invoke the above function with

seed = 1236537;
ran_no = unifrnd(0.0,1.0,&seed);

I build with the flags below:

cc -m64 -o dist/Debug/Sun12-Solaris-x86/simperturbation build/Debug/Sun12-Solaris-x86/estimator.o build/Debug/Sun12-Solaris-x86/spoptimze.o -R/SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86 -R/usr/sfw/lib/64 -lm /SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86/randlib.so


However, at runtime I get the error below:

ld.so.1: simperturbation: fatal: relocation error: R_AMD64_PC32: file /SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86/randlib.so: symbol unifrnd: value 0x2800112fedf does not fit


Listing the dynamic dependencies of the target simulation binary:

ldd simperturbation
libm.so.2 => /lib/64/libm.so.2
randlib.so => /SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86/randlib.so
libc.so.1 => /lib/64/libc.so.1


After I recompiled the shared library with the -Kpic flag, it works.

The URL below helps.

http://blogs.sun.com/rie/entry/my_relocations_don_t_fit
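A minimal sketch of the rebuild, assuming the library source is a single hypothetical file unifrnd.c:

cc -m64 -Kpic -c unifrnd.c -o unifrnd.o    # compile as position-independent code
cc -m64 -G -o randlib.so unifrnd.o         # -G builds the shared object

Without -Kpic, the code in the .so keeps 32-bit PC-relative references (R_AMD64_PC32), which cannot reach more than 2 GB away once the library is mapped into a 64-bit address space.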

Binary & library info for a file in question

(1) file
(2) ldd
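
For example, against the binary from the relocation case above:

file dist/Debug/Sun12-Solaris-x86/simperturbation    # reports the format, e.g. 64-bit ELF executable
ldd dist/Debug/Sun12-Solaris-x86/simperturbation     # lists the shared libraries it binds to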

Stochastic Systems

governing variables -> system behavior

random variables -> stochastic system

large number: a sample is not "large" until adding more samples no longer changes the system behavior

Compiler flags for simulation tool

1. Flag for 64 bit memory model of x86 platform architecture
2. Dynamic Library
3. Dynamic Link
4. Math Library
5. Runtime Search Path
6. -I include directory

cc -m64 -c -g +w -I/SunStudioProjects/randlib -o build/Debug/Sun12-Solaris-x86/spoptimze.o spoptimze.c

cc -m64 -o dist/Debug/Sun12-Solaris-x86/simperturbation build/Debug/Sun12-Solaris-x86/estimator.o build/Debug/Sun12-Solaris-x86/spoptimze.o -R/SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86 -lm /SunStudioProjects/randlib/dist/Debug/Sun12-Solaris-x86/randlib.so


Saturday, June 09, 2007

Social Network

CEpedia and Koda are typical Sun social networks.

NIO or IO

It seems blocking vs. non-blocking addresses the FD limitation that previously existed on the server, and there is a workaround to raise the FD limit. However, since server socket processing is sequential by nature, I tend to suggest blocking IO for mid-to-large workloads, while small-to-mid workloads can take the non-blocking approach, since it functionally replicates queuing algorithms.

Inetmenu good for Laptop

It is a good tool for assigning an IP address to a laptop.

It requires root to run; it then gets an IP from a DHCP server or wireless access point.

S11 NV build Laptop

(1). Install a Solaris Express, Developer Edition build.
(2). If the machine is under SWAN, it will bypass a lot of setup and go straight through the default NIS process.
(3). sys-unconfig is required if you want to change things later. Do not enable DHCP here; use inetmenu instead.

However, sys-unconfig will not give you the option to change the hostname.

So it is important to know the hostname change process.

To change the hostname on a Solaris system:

1. Change the hostname in /etc/nodename
2. Run uname -S new_hostname to change the nodename for your current session.
3. Change the hostname in /etc/hostname.network_interface (e.g. /etc/hostname.hme0)
4. Run hostname new_hostname to change the hostname for your current session.
5. Change the hostname in /etc/hosts
6. Change the hostname in /etc/net/*/hosts (/etc/net/ticlts/hosts, /etc/net/ticots/hosts, /etc/net/ticotsord/hosts)
for directory in ticlts ticots ticotsord
do
    cd /etc/net/$directory
    sed 's/old_hostname/new_hostname/g' hosts > hosts.new
    mv hosts.new hosts
done
Solaris 7 or later additional instructions:
7. Change the hostname in the DUMPADM_SAVDIR= line in /etc/dumpadm.conf

Solaris 10 additional instructions:
8. Change the hostname in /etc/inet/ipnodes
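
A minimal sketch of steps 1-5, assuming the old name is oldbox, the new name is newbox, and the interface is hme0 (all hypothetical):

echo newbox > /etc/nodename
uname -S newbox                     # nodename for the current session
echo newbox > /etc/hostname.hme0
hostname newbox                     # hostname for the current session
sed 's/oldbox/newbox/g' /etc/hosts > /etc/hosts.new && mv /etc/hosts.new /etc/hosts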



However, for all x86 updates you should visit the community software (CSW) package download site:
http://www.blastwave.org/packages.php

blastwave.org


This includes wget, pkg-get, teTeX, etc.
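
For example, installing a package with Blastwave's pkg-get (assuming it was installed under the standard /opt/csw prefix):

/opt/csw/bin/pkg-get -i wget    # -i installs the named package plus its dependencies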

WTS 2007 Paper Publication

Exchange information on advances in mobile communications and wireless networking technology, management, applications, and security in a very pleasant Southern California conference environment with leaders and experts from industry, governmental agencies, and universities around the world at the Wireless Telecommunications Symposium.

WTS 2007 will focus on The Future of Wireless Communications. Planned highlights of WTS 2007 include:

* An IEEE Communications Society Co-Sponsored Welcoming Dinner with Internet Pioneer Vinton G. Cerf, Vice President and Chief Internet Evangelist at Google, as guest speaker
* Addresses and presentations by some of the most respected executives and researchers in the wireless communications industry
* Panel discussions including Future Directions in Wireless Communications Research, Wireless Network Security, New Wireless Communications Ventures, Wireless Communications Investments, Mobile Wireless Services and Business, Wireless Communications Business Strategy, Advances in Satellite Communications, and The Future of Deep Space Communications
* A tutorial on Portable Emergency Networks and a Wireless Network Security Workshop
* Presentations of accepted academic and practitioner applied research papers; a poster paper session; a doctoral students session
* A tour of Universal Studios Hollywood followed by a reception at CityWalk


Peer-reviewed proceedings will be published by the IEEE and will be available on its Xplore online publication system. A CD containing the invited speakers' presentations and accepted applied research papers will be distributed to registrants at the conference. Applicable student papers are welcome. Awards will be given for the outstanding undergraduate and graduate papers submitted.


http://www.csupomona.edu/~wtsi/doc/WTS_2007-Accepted_Paper_Program.htm


My paper on lock contention can be located via IEEE Xplore.

ICACT2007 Paper publication

http://www.icact.org/program/program.asp#3


It took place in February 2007.



My paper is listed under Index and SOA Performance.


Please visit IEEE Xplore.