Scott Binegar
Tom Donaldson
Rob Douglas
Niall Gaffney
Karla Peterson
Frank Tanner
Frank distributed WBS information for this project.
Frank created an APT-AI discussion list.
The normal APT-AI team meeting will be held on Mondays from 1pm until 3pm in room 112 (first floor of the Muller Building).
Scott & Rob each distributed a short write-up on distributed computing technologies. They are included as an appendix at the end of the minutes.
Frank distributed an e-mail regarding tools that the APT architecture should support (an excerpt from the Tool of the Future article). It is included at the end of this document in the appendix as well.
Rob will write up something regarding the use of CORBA and his high level design of the APT communication infrastructure.
none
First, we discussed last meeting’s administrative action items.
The bulk of the meeting was used to discuss distributed computing options. Initially, RMI (remote method invocation) was discussed. The following describes the highlights of RMI:
Conclusions:
We should encapsulate the communications mechanism to allow for using different technologies. This will give us the ability to plug in different communications mechanisms where appropriate. Overall, the architecture should not preclude any communications mechanism. Rob will write up his idea for an overall high-level design for this type of architecture.
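The pluggable-communications idea can be sketched in Java (the interface and class names here are illustrative, not an agreed APT design): the rest of APT codes against one small transport interface, and an RMI, CORBA, or socket implementation plugs in behind it.

```java
// Sketch of an encapsulated communications layer; names are hypothetical.
interface Transport {
    // Send a request to a named remote service and return its reply.
    String invoke(String service, String request) throws Exception;
}

// A trivial in-process stand-in; an RmiTransport or CorbaTransport
// implementing the same interface could be swapped in later without
// touching the rest of the application.
class LoopbackTransport implements Transport {
    public String invoke(String service, String request) {
        return service + " handled: " + request;
    }
}

public class TransportDemo {
    public static void main(String[] args) throws Exception {
        Transport t = new LoopbackTransport();  // the only place that names a mechanism
        System.out.println(t.invoke("ExposureTimeCalculator", "compute"));
    }
}
```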
Do we need to architect for communications with legacy batch processing systems? Rob feels this may limit our architecture. Connections with batch processes should be the exception, not the rule.
We may not architect for direct support of batch processing. However, we will need to communicate with some sort of legacy batch system in the short term. A wrapper could be used to perform this functionality without impacting the overall APT architecture.
The majority of our applications will still run in a Solaris/Unix environment. However, we should not preclude the use of Windows or Linux. We should be able to communicate from any platform to any platform.
Subject: StarView II RMI Interface
Date: Wed, 26 Jan 2000 16:27:48 -0500
From: "Scott Binegar" <binegar@stsci.edu>
To: <apt-ai@stsci.edu>
The following is a brief description of StarView II's network connections
with an explanation of how it uses the Java Remote Method Invocation (RMI)
interface.
StarView II actually has four separate network connections that it makes in
order to run properly. One is an HTTP connection, which it uses for
communication to the Quick server (this is the SQL generator) and to access
various setup files from its main server. Two other connections are socket
connections to databases (the main HST catalog and a DDL catalog). The
fourth connection is to a FormServer, and this is the Java RMI connection.
The FormServer runs as an application on our main server (right now, a Sun
Unix box running SunOS 5.6). The function of the FormServer is to supply
the StarView client with: 1)file directory trees of available saved forms,
and 2)remote form objects.
When the FormServer starts, it starts an RMI registry and registers itself
as an available service. The StarView client knows exactly where the
FormServer service is registered, and gets a handle to the process via the
Naming.lookup method. Once the connection is established, the StarView
client can access any of the public methods defined by the
FormServerInterface.
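As a sketch of the pattern described above (the service name and method here are made up, not StarView's actual API), a server can start a registry and bind an object, and a client can then fetch a handle via Naming.lookup; both sides are shown in one JVM for brevity.

```java
import java.rmi.Naming;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.server.UnicastRemoteObject;

// Hypothetical stand-ins for the FormServer interfaces described above.
interface FormService extends Remote {
    String listSavedForms() throws RemoteException;
}

class FormServiceImpl extends UnicastRemoteObject implements FormService {
    FormServiceImpl() throws RemoteException { super(); }
    public String listSavedForms() { return "saved-forms/default.form"; }
}

public class RmiLookupDemo {
    public static void main(String[] args) throws Exception {
        // Server side: start a registry and register the service under a
        // well-known name (port 2099 chosen to avoid the default 1099).
        LocateRegistry.createRegistry(2099);
        Naming.rebind("rmi://localhost:2099/FormServer", new FormServiceImpl());

        // Client side: the client knows exactly where the service is
        // registered and gets a remote handle via Naming.lookup.
        FormService handle =
            (FormService) Naming.lookup("rmi://localhost:2099/FormServer");
        System.out.println(handle.listSavedForms());

        System.exit(0);  // the exported object keeps a non-daemon thread alive
    }
}
```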
The FormServerImpl class defines the FormServer methods available to the
StarView client via the RMI interface. This class is compiled with the RMI
compiler, rmic, which produces the FormServerImpl_Stub.class file (for the
client) and the FormServerImpl_Skel.class file (for the server).
In both the StarView client and the FormServer server, an RMI Security
Manager must be set. This then requires that the client and server have
Java policy files that allow the applications access to the RMI sockets.
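For reference, a minimal policy file granting the socket access RMI needs might look like the following (a sketch only; a real deployment should grant narrower permissions than this):

```
// hypothetical example.policy, passed via -Djava.security.policy=example.policy
grant {
    permission java.net.SocketPermission "*:1024-65535", "connect,accept";
};
```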
One problem with RMI is that you may run into trouble if the client is
trying to connect to the server through a firewall. If the firewall only
allows SMTP and HTTP connections, then you may have to connect your RMI
client to an HTTP server, which would then call a CGI application that
would in turn connect to the RMI server. Not easy, and your performance
will probably be degraded by a factor of ten (see reference below,
Downing, Appendix A, RMI Security).
The StarView client allows the user to select a different form server
connection than the default. So, if the user knew that there was a form
server running at ECF, for instance, then they could switch to the URL of
that form server in the environment setting of the StarView client.
The Java Developers Connection (JDC) has a good on-line tutorial about RMI
at:
http://java.sun.com/docs/books/tutorial/rmi/index.html
Also, the following books are good references:
"Thinking in Java", by Bruce Eckel, chapter on network programming and RMI
"Java RMI", by Troy Downing
Hope this is of some use,
Scott
Subject: APT AI: Distributed Computing
Date: Mon, 31 Jan 2000 11:58:47 -0500 (EST)
From: Rob Douglas <rdouglas@stsci.edu>
To: apt-ai@stsci.edu
Distributed Computing
=====================
Needs of the APT
----------------
The APT needs some level of Distributed Computing. There are three levels:
Remote Processing
Distributed on the same cluster
Full Distributed
I think that the APT needs to be as simple as possible. The field of Full
Distributed processing is still in the research phase. Distributed on the same
cluster does not offer opportunities for running tools that exist only at one
site (true servers). I think that the best option is Remote Processing, where
the client does need to know some minimal information about the service it is
contacting.
Capabilities of the DOC
-----------------------
The DOC uses CORBA to connect objects called Services into the DOC system. A
Service implements a Run() method and then any specific methods that are agreed
upon between the DOC and the Service. There are three types of services
provided:
Lisp Service
Java Service
Wrapped Service
A Lisp service is instantiated by executing a Lisp image with a command-line
argument telling it the IOR of the Server so that it can listen for
connections.
Java services can be instantiated in-process if running on the same machine.
Wrapped Services allow us to run any command as if it were being run
natively, even on a remote machine.
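The wrapped-service idea can be sketched in plain Java (no CORBA here; the Service interface and names are illustrative, not the DOC's actual API): the wrapper launches a native command and returns its output as if it were an ordinary service result.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

public class WrappedServiceDemo {
    // Stand-in for the agreed-upon Run() entry point described above.
    interface Service {
        String run() throws Exception;
    }

    // Wraps an arbitrary native command as a Service.
    static class WrappedService implements Service {
        private final String[] command;
        WrappedService(String... command) { this.command = command; }

        public String run() throws Exception {
            // Launch the command and capture its output so callers see
            // the result exactly as they would from any other Service.
            Process p = new ProcessBuilder(command)
                    .redirectErrorStream(true).start();
            StringBuilder out = new StringBuilder();
            try (BufferedReader r = new BufferedReader(
                    new InputStreamReader(p.getInputStream()))) {
                String line;
                while ((line = r.readLine()) != null) out.append(line);
            }
            p.waitFor();
            return out.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        Service svc = new WrappedService("echo", "wrapped command output");
        System.out.println(svc.run());
    }
}
```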
Recommendation
--------------
CORBA offers a consistent way of creating and maintaining interfaces with
other systems. There is some risk of a poor implementation, but I think the
architecture itself suffers very little from using CORBA.
More info
---------
See the DOC web page at
http://www-int.stsci.edu/apsb/doc/DOC/DOC.html
specifically the first section.
ROB
--
AI Software Systems Engineer
ESS/APST
Subject: My APT-AI Action Item
Date: Thu, 27 Jan 2000 15:41:20 -0500
From: frank tanner <tanner@stsci.edu>
To: apt-ai@stsci.edu
I was supposed to determine what kinds of tools APT should
encompass/connect to. This research was already done by the Tool of the
Future working group (thanks to Karla for pointing this out). I have
included section 3.3 of the Tool of the Future recommendation. It's
lengthy; however, I think the information presented is very valuable.
Please review it and we will discuss some of its content at Monday's
meeting.
Please contact me if you have any preliminary questions or concerns.
Thanks,
-Frank
3.3 Tools we would like in APT

In determining the tools and functionalities that we would like to have in
the APT environment we not only considered our high level goals, but also
evaluated the tools for the impact that they would have on both the
scientific community and observatory staff. By and large, the tools had to
provide a large improvement over the RPS2 tools. Some tools have been
suggested to complete the integrated environment so that the whole proposal
submission process is possible in a single environment. The following lists
the tools and functionalities that we consider the basic set we would like
to have in APT. The tools can be extended to provide more detailed
functionality as both the tool and the environment mature.
1. Visual Target Tuner (VTT) - We would like to prepare the prototype
SEA VTT for operational release and use. Currently there is no tool that
allows proposers to visually determine the exact field of view that is
appropriate for their science. Availability of such a tool would not
only provide proposers with information earlier in the proposal
preparation process, but would also reduce observatory staff effort
that is presently being spent on iterating over details of a proposal
with the proposer (see the example in section 2.2). At present, the SEA
VTT does not provide useful information concerning available guide
stars. We consider this a promising functionality to be developed.
Other candidates for improved functionality are access to data sets in
archives, display of offset patterns, bad pixel information, connection
to IRAF/STSDAS, improved access to target catalogues and lists, and
ability to represent spectral lines, grisms, and coronagraphy.
2. Exposure Time Calculators (ETCs) - Web-based ETCs already exist and
are extensively used by observers. The prototype SEA ETC tool is the
next step towards ETCs that provide users with the capability to
effectively explore the available parameter space. We would like to have
such an interactive ETC in the APT environment. The ETC is an important
tool to integrate into the environment as it can provide easy access to
a functionality that is always being used by proposers as they develop
observations. A logical choice would be the SEA ETC for operational
release and use.
3. Phase I Submission Form - We would like to provide Phase I proposers
with a web based electronic form to simplify the submission process.
This will likely be implemented using the SEA proposal definition forms.
4. Exposure Planner - Presently all users expend a lot of resources in
laying out their exposures in the allocated orbits. This task in RPS2 is
time consuming and frustrating. We would like to continue to develop the
prototype Exposure Planner developed by the SEA group that displays
exposures as they will be executed within orbits. It allows manipulation
of exposure times and ordering with instantaneous updates of overhead
information. This will make it easier for observers to lay out their
orbits without time-consuming iteration with RPS2. The earliest versions
will be based on a rough estimate of overhead times matching those
described in the Phase I Call For Proposals. As the TransVERSE project
matures, we would like to implement the capability to connect to
TransVERSE and receive far more accurate information including inserted
parallel observations and buffer management. We would also like to
explore allowing users access to a detailed breakdown of overhead
components. Once later phases of TransVERSE are complete we would like
to implement an optimizer under the control of the observer which uses
TransVERSE's search capabilities to improve the efficiency of the
observing program.
5. Bright Object Checker - Bright object checking is an essential part
of our Phase II process which directly affects the health and safety of
our instruments. At present, observatory staff do all the bright object
checking (often manually) and once again spend time on iterating over
details with the proposer. We would like our software to provide
information about bright objects to observers. This will help decrease
the amount of work done at STScI after submission to address bright
object issues. Since the New Guide Star System (NGSS) is the most
accurate source of bright object information, this capability is likely
to be implemented via a connection to NGSS.
6. Visit Planner - We would like to develop a graphical tool that allows
observers to visualize timing relationships between visits (e.g. BEFORE,
AFTER, GROUP WITHIN) and to better understand unschedulable situations.
At present there is no visualization of timing links or other schedulability
information between observations to determine the effect of a change on the
rest of the program. A connection to the Spike
system will allow such a tool to provide instantaneous schedulability
feedback.
7. Canned Observing Strategies - We would like to automate the process
of applying customizable observing strategies to observing programs
(e.g. mosaicing). In RPS2 such a task is cumbersome.
8. Import of Data from Archives - We would like to be able to import
details of an observing program from any of a number of mission archives
to be used to form a new proposal. The goal here is to support the
proposal development process by allowing observers to graphically
visualize data. We would most likely start with the Hubble archive and
add access to other archives as feedback and feasibility indicate.
9. Improved Software Updates - We would like to improve the way
observers have access to the latest data on the state of the
observatory. We need a strategy that will allow up-to-the-minute access
to operational and hardware changes, but that also supports those who
wish their environment to remain stable while they compare the results
of scientific trade-offs.
10. Tight Integration with Online Documentation - We would like to couple
our automated tools with online documentation so that information on any
part of the system is easy to find.
11. Access to Execution Data - We would like our observer tools to be
able to access operational data. This would be useful, for example, in
making schedulability determinations based on exactly when observations
have executed or will execute. Such a capability would reduce effort to
implement proposals at STScI by decreasing the incidence of
unschedulable observations due to execution information now unavailable
to RPS2.
12. Grouping Observations for Global Update - We would like our tool to
allow a proposer to group observations to perform a single update to all
of them, such as a filter change or new target. If, for example, an
observer finds out at a late stage that a planned target is infeasible,
it should be easy to substitute another target without a great deal of
search-and-change effort.