Thursday, September 28, 2006

Database Research

(7) Parallelism works well for traditional small and mid-sized data sets. For large data sets, however, parallelizing every subquery may be overkill when there is a large volume of concurrent access.
(8) How do we speed up access to archived data, and does parallelization help? Is archived data indexed? If not, a sequential scan is required; can that scan be parallelized?
(9) How do we move large amounts of data through the memory hierarchy of parallel computers?
(10) Future systems need to handle searches where part of the data comes from archives.
(11) Current disk storage is used as a read/write cache. New algorithms are required for buffer management across a three-level storage hierarchy.
(12) The current transaction model works well for short transactions. For long-running transactions, we need an entirely new approach to data integrity and recovery.
(13) Space-efficient algorithms for versioning, and a configuration model for the database to handle versions of objects.
(14) Extend existing data models to include much more semantic information about the data.
(15) Browsing combined with interrogation of the nature of the process that merges data for heterogeneous and distributed databases.
(16) Current distributed DBMS algorithms for query processing, concurrency control, and multi-copy support were designed for a few sites. They must be rethought for 1,000 or 10,000 sites.
(17) Local caching and local replication of remote data become important; efficient cache maintenance is an open problem.

Zero page and manual SSO login

We are implementing a POC for a customer. For this POC we're trying to automate the complete authentication process within AM. We've written a servlet that is deployed in the same war file (and context) as Access Manager and that handles authentication (using com.sun.identity.authentication.AuthContext) and creates the token (using com.iplanet.sso.SSOTokenManager), so we don't redirect to /UI/Login if the SSOToken is invalid!

After establishing the session we want to redirect the user to a site that is protected by a policy agent (using response.sendRedirect(targetUrl)). However, SSO fails and the user needs to authenticate again. It seems that the normal AM cookies (iPlanetDirectoryPro, created when you log in using /UI/Login) are not automatically created.

One final thing: the setup itself is okay; we did sanity checks using policy agents and that works fine.

Questions:
1. Can someone give me some hints and tips on how to create a valid session, SSO token, and the corresponding cookies using just the API?

The expected usage of this kind of flow is ideally through a policy agent protecting a resource, which detects the missing SSOToken and authenticates on its own. It looks like you are trying to do that automatically without user intervention. In that case you can use zero page login (more details in the auth arch document, pp. 24-26), so you don't have to worry about setting domain cookies etc.
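
For illustration, zero page login can be as simple as redirecting the browser to the login URL with the credentials passed as request parameters, so AM authenticates and sets its own cookies. Here is a minimal sketch; the IDToken1/IDToken2 parameters, the /amserver context path, and the target URL are assumptions that must be adapted to your deployment:

import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ZeroPageLoginServlet extends HttpServlet {
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Hypothetical credential source for the POC; replace as needed.
        String user = req.getParameter("user");
        String pass = req.getParameter("pass");
        // Target site protected by the policy agent (hypothetical URL).
        String target = "http://protected.example.com/app";
        // Zero page login: AM reads the credential parameters, authenticates,
        // sets its own cookies, then honors the goto parameter.
        resp.sendRedirect("/amserver/UI/Login"
                + "?IDToken1=" + URLEncoder.encode(user, "UTF-8")
                + "&IDToken2=" + URLEncoder.encode(pass, "UTF-8")
                + "&goto=" + URLEncoder.encode(target, "UTF-8"));
    }
}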

In your approach you would have to set the cookie yourself on the response. Sample code to do that might look like this:

// Assumes a valid SSOToken "token", the servlet's HttpServletResponse
// "response", and a setlbCookie(String) helper that returns the
// load-balancer cookie for a domain (or null).
//
// Needed imports:
// import java.util.Iterator;
// import java.util.Set;
// import javax.servlet.http.Cookie;
// import com.iplanet.am.util.SystemProperties;
// import com.iplanet.services.util.CookieUtils;
// import com.sun.identity.sm.ServiceSchema;
// import com.sun.identity.sm.ServiceSchemaManager;

try {
    // Look up the configured cookie domains from the platform service.
    ServiceSchemaManager scm = new ServiceSchemaManager(
            "iPlanetAMPlatformService", token);
    ServiceSchema platformSchema = scm.getGlobalSchema();
    Set cookieDomains = (Set) platformSchema.getAttributeDefaults()
            .get("iplanet-am-platform-cookie-domains");

    String value = token.getTokenID().toString();
    String cookieName = SystemProperties.get("com.iplanet.am.cookie.name");

    // Host-only cookie (no domain) as a fallback.
    response.addCookie(CookieUtils.newCookie(cookieName, value, "/"));

    // One cookie per configured cookie domain, plus the load-balancer
    // cookie where one is defined.
    Iterator iter = cookieDomains.iterator();
    while (iter.hasNext()) {
        String cookieDom = (String) iter.next();
        Cookie cookie = CookieUtils.newCookie(cookieName, value, "/", cookieDom);
        response.addCookie(cookie);
        Cookie loadBalancerCookie = setlbCookie(cookieDom);
        if (loadBalancerCookie != null) {
            response.addCookie(loadBalancerCookie);
        }
    }
} catch (Exception e) {
    // Don't swallow the exception silently; at least log it.
    e.printStackTrace();
}
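
Call this right after the AuthContext login succeeds and before the sendRedirect, so the browser already carries the iPlanetDirectoryPro cookie when it reaches the agent-protected site.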

JES MF Reference

Even though it is technical, a good starting point is the JES-MF engineering site at
http://twiki.france/twiki/bin/view/JESMF20/WebHome

JES UWC Health check via Layer 7 Switch

In a typical JES Communications Suite installation, we install Communications Express (also known as UWC) to provide a web interface for Mail, Calendar and Address Books.

UWC is a web application running in a web server, and it relies on the HTTP interface provided by the Messaging Server (not the web server but a dedicated daemon, mshttpd) to display some pages.
Both processes bind to the same IP address, but UWC uses port 80 and mshttpd uses port 81.

The problem I'm facing is how to link these two applications in the N2120 configuration. If mshttpd is down, users are still redirected to the running UWC on the same box, but because mshttpd is not running, some pages (after login) cannot be displayed and the browser shows "Bad Gateway. Processing of this request was delegated to a server not functioning properly".

To Do:

The problem lies in the fact that UWC returns HTTP status code 200 OK even when mshttpd is down. What you need to do is create a health check that looks for a specific string in the response, instead of relying on the HTTP status code.
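
To illustrate what such a string-based check does, here is a minimal sketch in Java; the URL and the marker string are assumptions and must be adapted to a page that UWC can only render when mshttpd is up:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class UwcHealthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL of a UWC page that needs mshttpd to render.
        URL url = new URL("http://uwc.example.com:80/uwc/auth");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setConnectTimeout(5000);
        conn.setReadTimeout(5000);

        // Read the whole response body.
        StringBuffer body = new StringBuffer();
        BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            body.append(line);
        }
        in.close();

        // Healthy only if the marker string is present, regardless of 200 OK.
        boolean healthy = body.indexOf("Communications Express") >= 0;
        System.out.println(healthy ? "UP" : "DOWN");
        System.exit(healthy ? 0 : 1);
    }
}

A layer 7 switch such as the N2120 would apply the same logic in its service check definition: match on the string, not on the status code.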

How to install another JDK instance with a strong encryption policy

* Download JDK 1.4.2 (64-bit) from
http://java.sun.com/j2se/1.4.2/SAPsite/download.html

* Unpack it to /opt

* Create a softlink /opt/java1.4 pointing to /opt/j2sdk1.4.2

* Install the policy files manually in /opt/java1.4

* Mount /opt as lofs

* Start sapinst

Sapinst will detect that the policy is already there and will not try to
install it again.
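
To verify that the unlimited-strength policy files actually took effect in the new JDK, a small sketch of my own (not part of the original procedure) is to try initializing a cipher with a key longer than the default policy allows:

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;

public class PolicyCheck {
    public static void main(String[] args) {
        try {
            // A 256-bit AES key fails under the default (limited) policy.
            byte[] keyBytes = new byte[32];
            Cipher c = Cipher.getInstance("AES");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(keyBytes, "AES"));
            System.out.println("Strong (unlimited) policy is installed.");
        } catch (java.security.InvalidKeyException e) {
            System.out.println("Limited policy still active: " + e);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

Run it with /opt/java1.4/bin/java so you are testing the new instance rather than the system JDK.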

Wednesday, September 27, 2006

JDK access issue from sparse zone

In the Global zone there is already a copy of the JDK installed (by default
in Solaris 10), and all the java links are set up properly in /usr.
However, as this is a sparse zone, /usr is inherited, i.e. read-only.
Installing a JDK anywhere in the sparse zone, while it solves the problem,
still requires the user to change the appropriate links/PATHs/etc. to
ensure the right JDK gets called.
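
A quick way to confirm which JDK actually resolves inside the zone after adjusting the PATH (a trivial sketch of my own, not from the original note):

public class WhichJava {
    public static void main(String[] args) {
        // Prints the runtime's install location and version, which makes it
        // obvious whether the inherited /usr JDK or the local one was used.
        System.out.println("java.home    = " + System.getProperty("java.home"));
        System.out.println("java.version = " + System.getProperty("java.version"));
    }
}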

Sunday, September 24, 2006

USDT via JavaScript

JavaScript with DTrace


http://blogs.sun.com/brendan/entry/dtrace_meets_javascript

System vendor configuration: CRITICAL vs OPTIMAL

This is not limited to disk but applies to any key performance measurement within a system.

As a system vendor, we need to consider ISV and even end-user vertical workloads, the system architecture, and deployment considerations from a data-center operations point of view. Doing so lets us make a realistic assessment of the total cost of ownership at the end point, which is useful for competitive analysis and for architecture selection at the end-user level.

However, I wonder whether a system vendor needs to implement an end-to-end HW configuration, or should stop at a critical point and leave the further specific HW and SW HA deployment as alternatives.

Specifically, I tend to think we need to provide CRITICAL rather than OPTIMAL configurations, in order to leave the flexibility and headroom for the end deployment to make its own choices.

Regarding CRITICAL vs. OPTIMAL, we can classify the default configurations so that systems meet customer demands.