Developer guide

This part contains the technical documentation about JNode and is intended for JNode developers.


This chapter is a small introduction to the technical documentation of JNode.
It covers the basic parts of JNode and refers to their specific documentation.

JNode is a Virtual Machine and an Operating System in a single package. This implies that the technical documentation covers both the Virtual Machine side and the Operating System side.

Besides these two, there is one aspect to JNode that is shared by the Virtual Machine and the Operating System. This aspect is the PluginManager. Since every module in JNode is a Plugin, the PluginManager is a central component responsible for plugin lifecycle support, plugin permissions and plugin loading, unloading & reloading.

The picture below gives an overview of JNode and its various components.
[Figure: An overview of JNode's architecture]

It also shows which parts of JNode are written in Java (green) and which parts are written in native assembler (red).

As can be seen in the picture above, many parts of the class library are implemented using services provided by the JNode Operating System. These services include filesystems, networking, gui and many more.

Getting the sources

For developing JNode you first need to get the sources. There are basically three ways to get them, each with its own advantages and disadvantages:

  • Getting the sources from our nightly build server as a tar.bz2 archive. This is the fastest way to get the sources, but you should not use it for development, as it makes it hard to update your local sources once you have modified them.
  • The second possibility is to use SVN, which is our main repository. SVN is easy to use, but it is a bit slow, and you need access permissions to commit. To get access permission you have to prove that you can follow the JNode guidelines.
  • For new developers the recommended way is to use git. We have a git repository that is kept in sync and makes it easy to create patches against trunk.

Have a look at the subpages for a more detailed description of the commands.

SVN Usage (deprecated: we have moved to GitHub)

This page is deprecated since we have moved to GitHub

This is a brief overview of SVN and how to use it. First of all, there are three ways to access SVN: svn, svn+ssh and https. SourceForge uses WebDAV, which means you can also browse the repository online with your favorite browser.

Subversion uses three toplevel directories named trunk, branches and tags. Trunk can be compared to CVS HEAD; branches and tags are self-explanatory.

To check out the source simply type:
svn co <repository-url>/trunk/ jnode
which creates a new directory called jnode (mind the space between "trunk/" and "jnode"!). Everything under /jnode/trunk in the repository will be copied into jnode/ on your local computer.

The everyday commands (svn up, svn add, svn commit) work as expected.

New in Subversion are copy, move and delete. If you copy or move a file, its history is copied or moved with it; if you delete a directory, it will no longer show up when you do a fresh checkout.

If you want to make a branch from a version currently in trunk, you can simply copy the content from trunk/ to branches/, e.g. by:
svn copy <repository-url>/trunk <repository-url>/branches/<branch-name>

I think that's the most important part for the moment; for more information have a look at the SVN Handbook.

Btw, for using SVN within Eclipse you have to install the Subclipse plugin.

Using Git (at GitHub)

The URLs for the official git repository at GitHub are listed below:

SSH: git@github.com:jnode/jnode.git

For those who know what they are doing already and simply want push access, refer to the page on setting up push access. For those that are unfamiliar with git, there are a few git pages below that explain some of the common tasks of setting up and using git. This of course is not meant to be a replacement for the git manual.


Setting up push access

In order to gain push access to the repository you will have to create a username on the hosting site and upload a public ssh key. Have the key ready, as you will be asked for it when you sign up. If you already have a key, you can register your username and give your username, email and public key. There is no password involved with the account; the only password is the one you put on your ssh key when you create it, if you choose to do so. It's not as if sensitive material is involved, so don't feel compelled to use a password.

Generating an ssh key requires ssh to be installed. Most Linux distributions have it already. Simply type in:

ssh-keygen

and your key will be generated. Your public key will be in ~/.ssh/ in a file with a .pub suffix, likely id_rsa.pub or id_dsa.pub. Open the file in a text editor (turn off line wrapping if it's enabled) and copy/paste the key into your browser. It's important that the key not be broken over multiple lines.
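As a concrete illustration, the following throwaway commands generate a key pair non-interactively into a temporary directory and confirm that the public key occupies a single line; the file names are just the ssh-keygen defaults redirected to a scratch location:

```shell
# Generate a disposable RSA key pair with an empty passphrase in a temp
# directory, then verify the public key is exactly one line long.
set -e
dir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$dir/id_rsa"
wc -l < "$dir/id_rsa.pub"    # prints 1
```

For the real key you would of course omit -N "" and pick a passphrase if you want one.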

Once your account has been created, send an email to the address found by the Owner tag on the jnode repo website. Once you are added, you will need to update your git configuration to use the new push url with your username.

When you originally cloned the repository, the configuration set up a remote named origin that referenced the public repo using the anonymous pull url. We'll now change that using git-config.

git config remote.origin.url git@github.com:[user]/jnode.git

Of course, replace [user] with your username, which is case sensitive.

Now you should be set up. Please see the page on push rules and etiquette before continuing.

Setting up your local git repo

The first thing you want to do, obviously, is install git if it's not already installed. Once git is installed, we need to clone the public repository to your local system. At the time of this writing this requires about a 130MB download.

First, position your current directory in the location where you want your working directory to be created. Don't create the working directory as git will refuse to init inside an existing directory. For this example we will clone to a jnode directory in ~/git/.

cd ~/git
git clone git@github.com:jnode/jnode.git jnode

Once this has finished you will have a freshly created working directory in ~/git/jnode, and the git repository itself will be located in ~/git/jnode/.git. For more info see Git Manual Chapter 1: Repositories and Branches.

This process has also set up what git refers to as a remote. The default remote after cloning is labeled origin and refers to the public repository. In order to keep your repository up to date with origin, you will have to fetch new objects from it. See Updating with git-fetch for more info.

When fetch pulls in new objects, you may want to update any branches you have locally that are tracking branches on origin. This will almost always be true of the master branch, as it is highly recommended that you keep your master branch 'clean' and in sync with origin/master. It's not necessary, but it may make life easier until you understand git more fully. To update your master branch to that of origin/master simply

git rebase origin master

Then if you wish to rebase your local topic branches you can

git rebase master [branch]

The reason we're using git rebase instead of git merge is that we generally do not want merge commits to be created. This is partly to do with the svn repo that commits will eventually be pulled into: svn does not handle git merges properly, as a git merge commit has multiple parent commits, and svn has no concept of multiple parents. Where git employs a tree structure for its commits, svn is more like a linked list, and is therefore strictly linear. This is also why it's important to fetch and rebase often, as it makes the transition of moving branches over to the svn repo much easier.
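To make the fetch-and-rebase cycle concrete, here is a self-contained sketch you can run safely: a bare repository in a temp directory stands in for the real origin, and the repository and branch names (upstream.git, seed, mytopic) are invented for the example.

```shell
# Simulate the recommended fetch-and-rebase cycle using a throwaway
# stand-in for the public repository.
set -e
work=$(mktemp -d)
git init -q -b master --bare "$work/upstream.git"

# "seed" plays the role of everyone else committing upstream.
git init -q -b master "$work/seed"
cd "$work/seed"
git config user.email dev@example.org
git config user.name Dev
git remote add origin "$work/upstream.git"
echo base > file.txt
git add file.txt
git commit -q -m "initial"
git push -q origin master

# Our clone, with a local topic branch on top of master.
git clone -q "$work/upstream.git" "$work/jnode"
cd "$work/jnode"
git config user.email dev@example.org
git config user.name Dev
git checkout -q -b mytopic
echo change > topic.txt
git add topic.txt
git commit -q -m "topic work"

# Upstream moves on...
cd "$work/seed"
echo more >> file.txt
git commit -q -am "upstream change"
git push -q origin master

# ...so fetch, bring master up to date, and rebase the topic branch.
cd "$work/jnode"
git fetch -q origin
git rebase -q origin/master master
git rebase -q master mytopic
git log --format=%s mytopic   # prints: topic work, upstream change, initial
```

After the two rebases, master matches origin/master exactly and mytopic sits linearly on top of it, with no merge commits anywhere.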

To learn more about branches refer to the git manual. It is highly recommended that new users of git read through chapters 1-4, as this explains a lot about how git operates; you will likely want to keep it bookmarked for quick reference until you get a handle on things.

For those users that find the git command line a bit much, there is also `git gui`, a very nice tool that lets you do a lot of the command-line tasks via a GUI. There is also an Eclipse plugin under development called EGit, part of the JGit project, which provides a pure Java implementation of git.

Git Etiquette

Once you have push access to the public git repo, there are a few simple rules I'd like everyone to observe.

1) Do not push to origin/master
This branch is to be kept in sync with the svn repo and is updated hourly. When it is updated, any changes pushed to it will be lost anyway, as the update script is set up in overwrite mode. Even so, if someone fetches changes from origin/master before the update script has had a chance to bring it back in sync, those people will have an out-of-sync master, which is a pain for them. To be on the safe side, when pulling from origin master, it doesn't hurt to do a quick 'git log origin/master' before fetching to see if the commit messages have a git-svn-id: in the message. This is embedded by git for commits from svn. If the top commits do not have this tag, then someone has pushed into origin/master.

2) Do not push into branches unless you know what's going on with them.
If you have a branch created on your local repo and you would like to have your changes pulled upstream then push your branch to the repo and ask for it to be pulled. You can push your branch to the public repo by

git push origin [branch]

so long as a branch by that name does not already exist. Once the changes have been pulled, the branch will be removed from the public repo, unless the branch is part of some further development. You will still have your branch on your local repo to keep or delete at your leisure.
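The push itself can be tried safely against a scratch repository. In this sketch public.git stands in for the real origin, and fs-fixes is an invented topic-branch name:

```shell
# Push a local topic branch to a stand-in public repository and list the
# branch heads the remote now advertises.
set -e
d=$(mktemp -d)
git init -q -b master --bare "$d/public.git"
git clone -q "$d/public.git" "$d/local"
cd "$d/local"
git config user.email dev@example.org
git config user.name Dev
echo hello > readme.txt
git add readme.txt
git commit -q -m "initial"
git push -q origin master    # seeding the scratch repo only; never do this to the real origin/master
git checkout -q -b fs-fixes
echo fix > fix.txt
git add fix.txt
git commit -q -m "fs fixes"
git push -q origin fs-fixes  # the 'git push origin [branch]' step from above
git ls-remote --heads origin # shows refs/heads/fs-fixes alongside refs/heads/master
```

Against the real repository you would only run the final two commands, and then ask for fs-fixes to be pulled.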

3) Sign-off on your work.
Although git preserves the author of its commits, svn overwrites this information when it is committed. Signing off is also your way of saying that this code is yours, or that you have been given permission to submit it to the project under the project's license. Commits will not be pulled upstream without a sign-off. The easiest way to set this up is to configure git with two variables.

git config user.name [name]
git config user.email [email]

Then when you make your commit, add an -s flag to git commit and it will automatically append a Signed-off-by: line to the commit message. This is not currently enforced project-wide, although it should be. Also, if someone sends you a patch, you can add a Created-by: tag for that person, along with your own sign-off tag.
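A quick way to see the effect, using a throwaway repository and an example identity:

```shell
# Configure an identity, make a signed-off commit, and show that git
# appended the Signed-off-by: line to the message.
set -e
repo=$(mktemp -d)
git init -q -b master "$repo"
cd "$repo"
git config user.name "Jane Developer"
git config user.email "jane@example.org"
echo work > file.txt
git add file.txt
git commit -q -s -m "example change"
git log -1 --format=%B   # last line: Signed-off-by: Jane Developer <jane@example.org>
```

In a real clone you would set the two config variables once (or globally with --global) and simply remember the -s flag on every commit.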

Configuration process

JNode has a number of configuration options that can be adjusted prior to performing a build. This section describes those options, the process of configuring those options, and the tools that support the process.

JNode is currently configured by copying a template configuration file to its active counterpart and editing this and other configuration files using a text editor.

In the future, we will be moving to a new command-line tool that interactively captures configuration settings, and creates or updates the various configuration files.

The Configure tool

The Configure tool is a Java application that is designed to be run in the build environment to capture and record JNode's build-time configuration settings. The first generation of this tool is a simple command-line application that asks the user questions according to an XML "script" file and captures and checks the responses, and records them in property files and other kinds of file.

The Configuration tool supports the following features:

  • Property types are specified in terms of regexes or value enumerations.
  • Properties are specified in terms of a property name and type, with an optional default value.
  • Property sets are collections of properties associated with files:
    • They are typically loaded from the file, updated by the tool and written back to the file.
    • Properties can be expanded into template XML and Java source files as well as classic Java properties files.
    • A "FileAdapter" API allows new file formats to be added as plugin classes.
  • Property values are captured in "screens", each consisting of a sequence of "items" which define the questions that are presented to the user.
    • Each "item" consists of a property name and a multi-line "text" that explains the property to the user.
    • A screen can be made conditional, with a "guard" property that determines whether or not the properties in the screen are captured.
  • A configuration script file can "import" other script files, allowing the configuration process to be modularised. All relative pathnames in scripts are resolved relative to the script file that specifies them.

The configuration tool is launched using its wrapper script:

   $ ./<configure-script>

When run with no command arguments as above, the script launches the tool using the configuration script at "all/conf-source/script.xml". The full command-line syntax is as follows:

   ./<configure-script> --help
   ./<configure-script> [--verbose] [--debug] <script-file>

The command creates and/or updates various configuration files, depending on what the script says. Before a file is updated, a backup copy is created by renaming the existing file with a ".bak" suffix.

Configuration script files

The Configure tool uses a "script" to tell it what configuration options to capture, how to capture them and where to put them. Here is a simple example illustrating the basic structure of a script file:

  <type name="integer.type" pattern="[0-9]+"/>
  <type name="yesno.type">
    <alt value="yes"/>
    <alt value="no"/>
  </type>

  <propFile name="">
    <property name="prop1" type="integer.type"
              description="Enter an integer"/>
    <property name="prop2" type="yesno.type"
              description="Do you want to?"/>
  </propFile>

  <screen title="Testing set 1">
    <item property="prop1"/>
    <item property="prop2"/>
  </screen>
The main elements of a script are "types", "property sets" and "screens". Let's describe them in that order.

A "type" element introduces a property type which defines a set of allowed values for properties specified later in the script file. A property type's value set can be defined using a regular expression (pattern) or by listing the value set. For more details refer to the "Specifying property types" page.

A "propFile" element introduces a property set consisting of the properties to be written to a given property file. Each property in the property set is specified in terms of a property name and a previously defined type, together with a (one line) description and an optional default value. For more details refer to the "Specifying property files" page.

A "screen" element defines the dialog sequence that is used to request configuration properties from the user. The screen consists of a list of properties, together with (multi-line) explanations to be displayed to the user. For more details refer to the "Specifying property screens" page.

Finally, the "Advanced features" page describes the control properties and the import mechanism.

Specifying property types

Configuration property types define sets of allowable values that can be used in values defined elsewhere in a script file. A property type can be defined either using a regular expression or by listing the set of allowable values. For example:

  <type name="integer.type" pattern="[0-9]+"/>
  <type name="yesno.type">
    <alt value="yes"/>
    <alt value="no"/>
  </type>

The first "type" element defines a type whose values are unsigned integer literals. The second one defines a type that can take the value "yes" or "no".

In both cases, the value sets are modeled in terms of the "token" character sequences that are entered by the user and the "value" character sequences that are written to the property files. For property types specified using regular expressions, the "token" and "value" sequences are the same, with one exception: a sequence of zero characters is not a valid input token. So if the "pattern" could match an empty token, you must define an "emptyToken" that the user will use to enter this value. For example, the following defines a variant of the previous "integer.type" in which the token "none" is used to specify that the corresponding property should have an empty value:

  <type name="optinteger.type" 
        pattern="[0-9]*" emptyToken="none"/>

For property types specified by listing the values, you can make the token and value different for any pair. For example:

  <type name="yesno.type">
    <alt token="oui" value="yes"/>
    <alt token="non" value="no"/>
  </type>

Type values and tokens can contain just about any printable character (modulo the issue of zero-length tokens). Type names, however, are restricted to ASCII letters, digits, '.', '-' and '_'.

Specifying property files

A "propFile" element in a script file specifies details about a file to which configuration properties will be written. In the simplest case, a "propFile" element specifies a file name and a set of properties to be written. For example:

  <propFile fileName="">
    <property name="jnode.vm.size" type="integer.type"
              description="Enter VM size in Mbytes" default="512"/>
    <property name="jnode.vdisk.enabled" type="yesno.type"
              description="Configure a virtual disk" default="no"/>
  </propFile>

This specifies a classic Java properties file which will contain two properties. The "jnode.vm.size" property will have a value that matches the type named "integer.type", with a default value of "512". The "jnode.vdisk.enabled" property will have a value that matches the "yesno.type", defaulting to "no".
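For illustration, if the user simply accepted both defaults, the generated file would contain entries along these lines (the exact header and comments depend on how java.util.Properties writes the file):

```properties
jnode.vm.size=512
jnode.vdisk.enabled=no
```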

The Configure tool will act as follows for the example above.

  1. It will test to see if the property file exists in the same directory as the script file.
  2. If the file exists, it will be read using the java.util.Properties.load method, and the in-memory property set will be populated from the corresponding properties.
  3. If the property file does not exist, the in-memory property set will be populated from the "default" attributes.
  4. The "screen" elements will be processed as described in the "Specifying property screens" page to capture new property values.
  5. Finally, the property file will be created or updated using the corresponding store method.

Attributes of a "property" element
Each "property" element can have the following attributes:

  • "name" — gives the name of the property. Property names should be restricted to ASCII letters, digits, '.', '-' and '_'. This attribute is mandatory.
  • "type" — gives the name of the property's type. This attribute is mandatory.
  • "description" — gives a short (20 chars or so) description of the property that will be included in the prompt for the property's value. This attribute is mandatory.
  • "default" — gives a default value for the property if none is supplied by other mechanisms. This attribute is optional, but if present it must contain a valid value for the property's type.

Attributes of a "propFile" element
The Configure tool will read and write properties in different ways depending on the "propFile" element's attributes:

  • "fileName" — specifies the name of the file to be written. Depending on the other attributes, it may also be a source of default values. This attribute is mandatory.
  • a file format attribute — selects one of the alternative file formats; the possible values are listed below.
  • a default file attribute — specifies that default property values should be loaded from a separate default property file.
  • "templateFile" — specifies that the output file should be written by expanding the supplied template file, as described below.
  • a marker attribute — specifies an alternative marker character for template expansion; the default is '@'.

Alternative file formats

As described above, the Configure tool supports five different file types, plus more if you use plugin classes. These are as follows:

  • "properties" — a classic Java properties file, as documented in the Sun javadocs for the java.util.Properties class.
  • "xmlProperties" — an XML Java properties file, as documented in the Sun javadocs for the java.util.Properties class (Java 1.5 or later).
  • "xml" — an XML file whose structure is not known to the tool.
  • "java" — a Java source code file.
  • "text" — an arbitrary text file.
  • Other formats can be supported by plugin classes that implement the org.jnode.configure.adapter.FileAdapter API, with behavior as described below.

The file types "xml", "java" and "text" require the use of a template file, and do not permit properties to be loaded.

Template file expansion

If Configure uses a java.util.Properties.saveXXX method to write properties, you do not have a great deal of control over how the file is generated. For example, you cannot include comments for each property, and you cannot control the order of the properties.

The alternative is to create a template of a file that you want the Configure tool to add properties to. Here is a simple example:

# This file contains some interesting properties

# The following property is interesting
interesting=@interesting@

# The following property is not at all interesting
boring=@boring@

If the file above is specified as the "templateFile" for a property set that includes the "interesting" and "boring" properties, the Configure tool will output the property set by expanding the template, replacing "@interesting@" and "@boring@" with the corresponding property values.

The general syntax for @<name>@ sequences is:

    at_sequence ::= '@' name [ '/' modifiers ] '@'
    name        ::= ... # any valid property name
    modifiers   ::= ... # one or more modifier chars

The template expansion process replaces @<name>@ sequences as follows:

  • If <name> matches a property name in the property set, the sequence is replaced with the named property's value, rewritten as described below.
  • If <name> does not match a property name in the property set, the sequence is replaced with an empty string. (This is a change from early versions of the tool, which left the sequence unchanged.)
  • The sequence @@ is replaced by a single '@' character.
  • It is an error for an "opening" @ to not have a "closing" @ on the same line.

The template expansion is aware of the type of the file being expanded, and performs file-type specific escaping of properties before writing them to the output stream:

  • The expander for a "properties" file escapes the value according to the Java property file syntax. Three <modifier> values are supported:
    • The '=' modifier causes the property name and value to be expanded; i.e. "<name>=<value>", where the name and value parts are suitably escaped.
    • The '!' modifier (with '=') causes an empty property value to be suppressed by replacing the @<name>@ sequence with an empty string.
    • The '#' modifier (with '=') causes an empty property value to be commented out; i.e. "# <name>=<value>".
  • The expanders for "xmlProperties" and "xml" files escape the value so that it can be embedded in the text content of an element.
  • The expander for "java" files outputs the value with Java string literal escapes.
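The basic substitution (without modifiers or file-type escaping) can be mimicked in a couple of lines of shell, purely as an illustration of what the expander does with a known property; the property name and value here are invented:

```shell
# Expand a @name@ marker in a template the way the Configure tool would
# for a defined property; this sed sketch covers plain value substitution
# only, not the /= modifiers or per-file-type escaping.
set -e
dir=$(mktemp -d)
printf '%s\n' 'jnode.vm.size=@jnode.vm.size@' > "$dir/template"
sed -e 's/@jnode\.vm\.size@/512/' "$dir/template" > "$dir/out"
cat "$dir/out"    # prints: jnode.vm.size=512
```

The real tool additionally escapes the substituted value according to the output file type, as listed above.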

Specifying property screens

The "dialog" between the Configure tool and the user is organized into sequences of questions called screens. Each screen is described by a "screen" element in the configuration script. Here is a typical example:

  <screen title="Main JNode Build Settings">
    <item property="jnode.virt.platform">
The JNode build can generate config files for use with
various virtualization products.
    </item>
    <item property="expert.mode">
Some JNode build settings should only be used by experts.
    </item>
  </screen>

When the Configure tool processes a screen, it first outputs the screen's "title" and then iterates over the "item" elements in the screen. For each item, the tool outputs the multi-line content of the item, followed by a prompt formed from the designated property's description, type and default value. The user can enter a value, or just hit ENTER to accept the default. If the value entered by the user is acceptable, the Configure tool moves to the next item in the screen. If not, the prompt is repeated.

Conditional Screens

The screen mechanism allows you to structure the property capture dialog(s) independently of the property files. But the real power of this mechanism is that screens can be made conditional on properties captured by other screens. For example:

  <screen title="Virtualization Platform Settings"
          guardProp="jnode.virt.platform" valueIsNot="none">
    <item property="jnode.vm.size">
You can specify the memory size for the virtual PC.
We recommend a memory size of at least 512 Mbytes.
    </item>
    <item property="jnode.virtual.disk">
Select a disk image to be mounted as a virtual hard drive.
    </item>
  </screen>

This screen is controlled by the state of a guard property, viz. the "guardProp" attribute. In this case, the "valueIsNot" attribute says that the property needs to be set to some value other than "none" for the screen to be acted on. (There is also a "valueIs" attribute with an analogous meaning.)

The Configuration tool uses an algorithm equivalent to the following one to decide which screen to process next:

  1. The tool builds a work-list of the screens in the script. The screens are added to the list in the order that they are encountered by the script file parser.
  2. To find the next screen, the tool iterates over the work-list entries, looking for one that satisfies one of the following criteria:
    • a screen with no guard property, or
    • a screen whose guard property has been set, and which
      • has a "valueIs" attribute whose value equals the guard property's value, or
      • has a "valueIsNot" attribute whose value does not equal the guard property's value.
  3. The selected screen is then removed from the work-list and processed as described previously.
  4. To select the next screen, the tool goes back to step 2), repeating until either the work-list is empty, or none of the remaining screens satisfy the criteria.

Advanced features

The "changed" attribute
The "item" element of a screen can take an attribute called "changed". If present, this contains a message that will be displayed after a property is captured if the new value is different from the previous (or default) value. For example, it can be used to remind the user to do a full rebuild when critical parameters are changed.

Configuration files

The primary JNode build configuration file is the properties file in the project root directory.

Other important configuration files are the plugin lists. These specify the lists of plugins that make up the JNode boot image, and the lists that are available for demand loading in the various Grub boot configurations.

Build process

The build process of JNode consists of the following steps.

  • Compilation - Compiles all java source to class files.
  • Assembling - Combines all class files into a jar file.
  • Boot image building - Preloads the core object into a bootable image.
  • Boot disk building - Creates a bootable disk image.
  • CD-ROM creation (optional) - Creates a bootable CD-ROM (iso) image

Boot image building

When JNode boots, the Grub bootloader is used to load a Multiboot-compliant kernel image and boot that image. It is the task of the BootImageBuilder to generate that kernel image.

The BootImageBuilder first loads the java classes that are required to start JNode into their internal Class structures. These classes are resolved, and the most important classes are compiled into native code.
The object tree that results from this loading & compilation process is then written to an image in exactly the same layout as an object in memory. This means that the necessary heap headers, object headers and instance variables are all written in the correct sequence and byte ordering.
The memory image of all of these objects is linked with the bootstrapper code containing the microkernel. Together they form the kernel image loaded & booted by Grub.

Boot disk building

To run JNode, in a test environment or create a bootable CD-ROM, a bootable disk image is needed. It is the task of the BootDiskBuilder to create such an image.

The bootable disk image is a 16MB disk image containing a bootsector, a partition table and a single partition. This partition contains a FAT16 filesystem with the kernel image and the Grub stage2 and configuration files.

Build & development environment

This chapter details the environment needed to setup a JNode development environment.


JNode has been divided into several sub-projects in order to keep it "accessible". These sub-projects are:

JNode-All The root project where everything comes together
JNode-Core The core java classes, the Virtual Machine, the OS kernel and the Driver framework
JNode-FS The Filesystems and the various block device drivers
JNode-GUI The AWT implementation and the various video & input device drivers
JNode-Net The Network implementation and the various network device drivers
JNode-Shell The Command line shell and several system commands

Each sub-project has the same directory structure:

<subprj>/build All build results
<subprj>/descriptors All plugin descriptors
<subprj>/lib All sub-project specific libraries
<subprj>/src All sources
<subprj>/.classpath The eclipse classpath file
<subprj>/.project The eclipse project file
<subprj>/build.xml The Ant buildfile


JNode is usually developed in Eclipse (though it can be done without).
The various sub-projects must be imported into Eclipse. Since they reference each other, it is advisable to import them in the following order:

  1. core
  2. shell
  3. fs
  4. gui
  5. net
  6. builder
  7. distr
  8. all
  9. sound
  10. textui
  11. cli

For more details please have a look at the Howto.

IntelliJ IDEA

JetBrains Inc has donated an Open Source License for IntelliJ IDEA to the dedicated developers working on JNode.

Developers can get a license by contacting Martin.
Setup of the sub-projects is done using the modules feature, as with Eclipse.

You should increase the maximum memory used in the bin/idea.exe.vmoptions (Windows) or corresponding bin/*.vmoptions file; edit the -Xmx line to about 350mb. IntelliJ can be downloaded from the JetBrains website. Use at least version 5.1.1. Note that this version can import Eclipse projects.

Requirements for building under Windows

  1. Make sure that you have a Sun JDK for Java 1.6.0 at or near the most recent patch level. (Some older patch levels are known to cause obscure problems with JNode builds.)
  2. Make sure that the pathname for the root directory your JNode tree contains no spaces. (Spaces in the pathname are likely to break the build.)
  3. Create a "bin" directory to hold some utilities; see below.
  4. Use the "System" control panel to add the "bin" directory to your windows %PATH%.
  5. Download the "nasm" assembler from its website. (Make sure that you get the Win32 version, not the DOS32 version!)
  6. Open the downloaded ZIP file, and copy the "nasm.exe" file to your "bin" directory. Then rename it to "nasmw.exe".

Now you can start a Windows command prompt, change directory to the JNode root, and build JNode as explained in the next section.

Requirements for building under Linux

  1. Make sure that you have a Sun JDK for Java 1.6.0 at or near the most recent patch level. (Some older patch levels are known to cause obscure problems with JNode builds.)
  2. Make sure that the 'nasm' assembler is installed. If not, use "System>Add/Remove Software" (or your system's equivalent) to install it.


Run "build.sh" (on Unix) or "build.bat" (on Windows) with no arguments to list the available build targets. Then choose the target that best matches your target environment / platform.

Alternatively, from within Eclipse, execute the "all" target of all/build.xml. Building in Eclipse is not advised for Eclipse version 2.x because of the amount of memory the build process takes. From Eclipse 3.x make sure to use Ant in an external process.

A JNode build will typically generate the following files:

  • all/build/jnodedisk.pln - a disk image for use in VMWare 3.0
  • all/build/x86/netboot/jnodesys.gz - a bootable kernel image for use in Grub
  • all/build/x86/netboot/full.jgz - an initjar for use in Grub

Some builds also generate an ISO image which you can burn to disk, and then use to boot into JNode from a CD / DVD drive.

IntelliJ Howto

This chapter explains how to use IntelliJ IDEA 4.5.4 with JNode. JetBrains Inc has donated an Open Source License to the dedicated developers working on JNode. The license can be obtained by contacting Martin.

New developers not yet on the JNode project can get a free 30-day trial license from JetBrains Inc.


JNode contains several modules within a single CVS module. To checkout and import these modules in IntelliJ, execute the following steps:

  1. Checkout the jnode module from CVS using IntelliJ's "File -> Check Out from CVS".

    Dedicated developers should use a CVS root like ""

    Others should use Anonymous CVS Access with the CVS root ":pserver:[email protected]:/cvsroot/jnode"

  2. Open the project with "File -> Open project" and select the folder that was chosen as destination in the CVS check out. In the "jnode" folder select the "JNode.ipr" file.

The rest has been set up in the project, and you should now be able to start.


You can build JNode within IntelliJ by using the build.xml Ant file. On the right side of IntelliJ you will find an "Ant Build" tab where the ant file can be found. Run the "help" target to get help on the build system.

Due to the memory requirements of the build process, it may be better to run the build from the command line using build.bat (on Windows) or the equivalent shell script (on Unix).

Building on the Mac OSX / Intel platform

(These instructions were contributed by "jarrah".)

I've successfully built JNode on Mac OS X from the trunk and the 2.6 sources. Here's what I needed to do:

  1. I'm using 10.5.2. I'm not sure if it works on 10.4.x.
  2. Download and install Java SE 6 from the ADC site. I used Developer Preview 9. Note that this only works on 64 bit compatible machines (MacBook Pros with Core 2 Duo processors are OK). The link to the downloads page is:
  3. If you don't want to use Java SE 6 as your default Java (I don't), then edit and add /System/Library/Frameworks/JavaVM.framework/Versions/1.6/Commands/ before the java command.
  4. Download and build cdrtools (I used version 2.01.01). I just installed the mkisofs executable in /usr/local/bin (which is in my path).
  5. Download and install yasm; I used version 0.6.2.
  6. Edit all/lib/jnode.xml and change the javac "memoryMaximumSize" attribute to "1024m".
  7. Edit core/src/openjdk/sun/sun/applet/ and comment out line 34 "import*;"
  8. Run the build script with the "cd-x86-lite" target.

You should end up with an ISO image called jnode-x86.iso in all/build/cdroms.



Using OSX and PowerPC for JNode development and testing


What we want is:
1. CVS tool
2. IDE for development
3. A way to build JNode
4. A way to boot JNode for testing

First of all we need to install the XCode tools from Apple. Usually they are shipped with your OSX; look in /Applications/Installers/. If they are not there, you can download them from Apple's site.

1. CVS tool
Well, cvs is already part of the OSX installation. There are some GUI tools that make using cvs easier; SmartCVS is a good one, which you can also use on your Windows PC, Linux, etc.

2. IDE
Eclipse. ;-)

3. How to build JNode with a ppc machine (not FOR, WITH ppc)
Luckily for us, the JNode build process is based on Apache Ant which, being a Java tool, runs everywhere. The only problem is the native assembly parts of JNode; for these, the JNode build process uses nasm and yasm.

So the only thing we need to do is build these tools for ppc and use them. They will still produce x86 binaries, as they are written to do.

First of all we have to get the nasm and yasm sources from their respective project sites.

After that we unzip them and start the compile.

Open a terminal window and go inside the directory with the nasm sources

Run ./configure to create the Makefile for nasm

If everything is ok you are now ready to compile nasm. Just run "make nasm". There may be a problem if you try to compile all the nasm tools by running "make" (I had one), but you don't need them; nasm is enough.

Now copy nasm into your path. /usr/bin is a good place.

The same as for nasm: open a terminal window and go to the directory with the yasm sources.

Run "./configure"

Run "make"

Now you can either copy yasm to /usr/bin or run "make install", which will install the yasm tools under /usr/local/bin.

That's all for nasm and yasm; you are ready to build JNode. You may have problems using the build script, but you can always run the build command manually: "java -Xmx512M -Xms128M -jar core/lib/ant-launcher.jar -lib core/lib/ -lib /usr/lib/java/lib -f all/build.xml cd-x86"

4. Booting JNode
Well there is only one way to do that. Emulation.

There is VirtualPC for OSX, which is pretty good and fast. To use it, just create a new virtual PC and start it. When the virtual PC has started, right click on the CD-Rom icon at the bottom of the window (I know there is no right click on Macs; press Ctrl+click instead). Now tell VirtualPC to use the JNode iso image as the cdrom drive and boot from it. There you are!

I think there is also qemu for ppc. I have never used it, so I don't know how to configure it.

Source files & packages

This chapter explains the structure of the JNode source tree and the JNode package structure.

Directory structure

The JNode sources are divided into the following groups:

  • all
    Contains the global libraries, the (Ant) build files and some configuration files. This group does not contain java sources.
  • builder
    Contains the java source code used to build JNode. This includes several Ant tasks, but also code used to link Elf files and to write the JNode bootimage.
  • core
    Contains the JNode virtual machine code (both java and assembler), the classpath java library sources and the core of the JNode operating system, including the plugin manager, the driver framework, the resource manager and the security manager. This is by far the largest and most complex group.
  • distr
    Contains the first parts of the JNode distribution. This includes an installation program and various applications.
  • fs
    Contains the file system framework, the various file system implementations and the block device drivers, such as the IDE driver, harddisk driver, CD-ROM driver etc.
  • gui
    Contains the JNode gui implementation. This includes the graphics drivers, the AWT peer implementation, font renderers and the JNode desktop.
  • net
    Contains the JNode network layer. This includes the network drivers, the network framework, the TCP/IP stack and the connection between the network layer and the package.
  • shell
    Contains the JNode command shell and several system shell commands.
  • textui
    Contains a copy of the charva text based AWT implementation.
  • cli
    Contains the bulk of JNode's commands.

Every group is a directory below the root of the JNode CVS archive. Every group contains one or more standard directories.

  • build
    This directory is created during the build and contains the intermediate build results.
  • descriptors
    This directory contains the plugin descriptors of the plugins defined in this group.
  • lib
    This directory contains libraries (jar files) required only by this group. An exception is the All group, for which the lib directory contains libraries used by all groups.
  • src
    This directory contains the source files. Below this directory there are one or more source directories (source folders in Eclipse) containing the actual source trees.

JNode coding DOs and DONTs

This page lists some tips on how to write good JNode code.

Please add other tips as required.

Avoid using, System.out, System.err
Where possible, avoid using these three variables. The problem is that they are global to the current isolate, and are not necessarily connected to the place that you expect them to be.

A user-level command should use the streams provided by the Command API, e.g. by calling the 'getInput()' method from within the command's 'execute' method. Device drivers, services and so on that do not have access to these streams should use log4j logging.
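The shape of this pattern can be sketched as below. Note that CommandSketch, its sink field and getOutputWriter() are hypothetical stand-ins invented for this example, only so that it is self-contained; in real JNode code the accessors come from the shell's Command API.

```java
import java.io.PrintWriter;
import java.io.StringWriter;

// Hypothetical stand-in for the Command API: the shell decides where the
// writer really points (console, pipe, file); the command never needs to know.
abstract class CommandSketch {
    final StringWriter sink = new StringWriter();

    protected PrintWriter getOutputWriter() {
        return new PrintWriter(sink, true);
    }

    public abstract void execute();
}

class HelloCommand extends CommandSketch {
    @Override
    public void execute() {
        // Right: write to the stream supplied by the shell.
        // Wrong: System.out, which may belong to a different isolate.
        getOutputWriter().println("hello");
    }
}
```

The point of the indirection is that redirection ("HelloCommand > file") works without the command doing anything special.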

Avoid cluttering up the console with obscure or unnecessary logging
If a message is important enough to be written to the console, it should be self explanatory. If it is unimportant, it should be logged using log4j at an appropriate level. We really do not need to see console messages left over from someone's attempts to debug something 12 months ago ...

Avoid using Unsafe.debug(...) methods
The org.jnode.vm.Unsafe.debug(...) methods write to the VGA screen and (when kernel debug is enabled) to the serial port. This is ugly, and should be reserved for important early boot sequence logging, VM and GC debugging, and other situations where log4j logging cannot be used.
Don't call 'Throwable.printStackTrace' and friends
Commands should allow unexpected exceptions to propagate to the shell level, where they are handled according to the user's setting of the 'jnode.debug' property. (The alternative of adding a "--debug" flag to each command is a bad idea: it is a lot of work and will tend to lead to inconsistencies of implementation; e.g. commands that don't implement "--debug", send "--debug" output to unexpected places, or overload other functionality on the flag.)

Services, etc. should make the appropriate log4j calls, passing the offending Throwable as an argument.
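The shape of such a call, sketched here with java.util.logging so the example runs without the log4j jar; the log4j spelling is shown in the comments and the DeviceService class and its simulated failure are invented for this illustration:

```java
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

class DeviceService {
    // log4j equivalent: Logger.getLogger(DeviceService.class)
    private static final Logger log =
        Logger.getLogger(DeviceService.class.getName());

    // Reports failure via the return value instead of dumping a stack
    // trace on the console.
    boolean readBlock() {
        try {
            throw new IOException("simulated device error");
        } catch (IOException ex) {
            // Pass the Throwable itself so the stack trace goes to the log.
            // log4j equivalent: log.error("cannot read block", ex);
            log.log(Level.SEVERE, "cannot read block", ex);
            return false;
        }
    }
}
```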

Do use JNode's Command and syntax.* APIs for commands
Commands that are implemented using the Command and syntax.* APIs support completion and help, and are more likely to behave "normally"; e.g. with respect to stream redirection. There are lots of examples in the codebase.

If the APIs don't do what you want, raise an issue. Bear in mind that some requests may be deemed to be "too hard", or too application-specific.

Do include a 'main' entry point.
It is a good idea to include a legacy "public static void main(String[])" entry point in each JNode command. This is currently only used by the old "default" command invoker, but in the future it may be used to run the command in a classic JVM.
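The delegation pattern can be sketched as follows; SampleCommand, its status field and the body of execute() are invented for this example, while real JNode commands inherit their execute machinery from the Command API:

```java
// Hypothetical sketch of a command with a legacy entry point.
class SampleCommand {
    private String status = "not run";

    public void execute(String[] args) {
        status = "ran with " + args.length + " argument(s)";
    }

    public String getStatus() {
        return status;
    }

    // Legacy entry point: delegates straight to the command object, so the
    // class still works under the old "default" invoker or a classic JVM.
    public static void main(String[] args) {
        new SampleCommand().execute(args);
    }
}
```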
Don't use '\n\r' or '\r\n' in output.
The line separator on JNode is '\n', like on UNIX / Linux. If you expect your command to only run on JNode, it is not unreasonable to hardwire '\n' in output messages, etc. But if you want your command to be portable, it should use 'System.getProperty("line.separator")', or one of the PrintWriter / PrintStream's 'println' methods.
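A minimal sketch of the portable spelling (the Lines class is invented for this example):

```java
class Lines {
    // Portable line joining: picks up the platform separator, which is
    // "\n" on JNode and UNIX but "\r\n" on Windows.
    static String join(String first, String second) {
        String sep = System.getProperty("line.separator");
        return first + sep + second;
    }
}
```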

JNode Java style rules

All code that is developed as part of the JNode project must conform to the style set out in Sun's "Java Style Guidelines" (JSG) with variations and exceptions listed below. Javadocs are also important, so please try to make an effort to make them accurate and comprehensive.

Note that we use CheckStyle 4.4 as our arbiter for Java style correctness. Run the "checkstyle" build target to check your code style before checking it in or submitting it as a patch. UPDATE: please also run the "javadoc" build target to make sure that you haven't introduced any new javadoc warnings.

No TAB characters.
No TAB characters (HT or VT) are allowed in JNode Java source code.
Whitespace TABs should be replaced with the requisite number of spaces or newlines. Non-whitespace TABs (i.e. in Java strings) should be replaced with "\t" or "\f" escape sequences.
Use 4 space indentation.
Two or three characters is too little, eight is too much.
Maximum line width 120.
The JSG recommends 80 characters, but most people use tools that can cope with much wider.
Put following keywords on same line as }
For example:

    try {
        if (condition) {
            doSomething();
        } else {
            doSomethingElse();
        }
    } catch (Exception ex) {
        return 42;
    }
Note that the else is on the same line as the preceding }, as is the catch.

Indent labels by -4.
For example:

public void loopy() {
    int i;
LOOP:
    for (i = 100; i < 1000000; i++) {
        if (isPrime(i) && isPrime(i + 3) && isPrime(i + 5)) {
            break LOOP;
        }
    }
}
No empty { } blocks
A { } block with no code should contain a comment to say why the block is empty. For example,

    try {
        // ...
    } catch (IOException ex) {
        // we can safely ignore this
    }
No marker comments
It is generally accepted that marker comments (like the following) add little to the readability of a program. In fact, most programmers think they are an eyesore and a waste of space.

    // Start of private methods
Avoid copying javadoc
Instead of copying the javadoc from the parent class or method, use the {@inheritDoc} tag and if needed add some specific javadoc.

    /**
     * {@inheritDoc}
     */
    public void myMethod(String param1, int param2) {
        // ...
    }
Give your references in the javadoc
When you are implementing a class from a reference document, add a link in the javadoc.
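For example (the class, the method and the URL below are placeholders invented for this illustration):

```java
/**
 * Decodes a FAT boot sector.
 *
 * <p>Implemented from the filesystem's reference document:
 *
 * @see <a href="http://example.org/fat-spec">FAT specification (placeholder URL)</a>
 */
class BootSectorSketch {
    int bytesPerSector() {
        return 512; // fixed value, only here to make the sketch runnable
    }
}
```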


Package structure

The java classes of JNode are organized using the following package structure.
Note that not all packages are listed, but only the most important. For a full list, refer to the javadoc documentation.

All packages start with org.jnode.

Common packages

  • org.jnode.boot
    Contains the first classes that run once JNode is booted. These classes initialize the virtual machine and start the operating system.
  • org.jnode.plugin
    Contains the interfaces of the plugin manager.
  • org.jnode.util
    Contains frequently used utility classes.
  • org.jnode.protocol
    Contains the protocol handlers for the various (URL) protocols implemented by JNode. Every protocol maps onto a package below this package, e.g. the plugin protocol handler is implemented in org.jnode.protocol.plugin.

JNode virtual machine

  • org.jnode.vm
    Contains the core classes of the JNode virtual machine.
  • org.jnode.vm.classmgr
    Contains the internal classes that represent java classes, methods & fields. It also contains the classfile decoder.
  • org.jnode.vm.compiler
    Contains the base classes for the native code compilers that convert java bytecodes into native code for a specific platform.
  • org.jnode.vm.memmgr
    Contains the java heap manager, including the object allocator and the garbage collector.
  • org.jnode.vm.<arch>
    For every architecture supported by JNode a separate package exists that contains the architecture dependent classes, including classes for threads and processors and classes for native code compilation.

JNode operating system

  • org.jnode.driver
    Contains the driver framework.
    All drivers and driver APIs have a separate package below this package. Drivers of a similar type are grouped, e.g. all video drivers have a package below

  • org.jnode.system
    Contains the interfaces for the various low level resources in the system, such as memory regions, I/O port regions, DMA access.
  • org.jnode.fs
    Contains the filesystem framework.
    All file systems have a separate package below this package, e.g. the EXT2 filesystem implementation is contained in the org.jnode.fs.ext2 package and its sub-packages.
    Contains the network layer.
    All network protocols have a separate package below this package, e.g. the IPv4 protocol and its sub-protocols are contained in the package and its sub-packages.
    Contains the command shell.
    All system commands are grouped in packages below this package.

Special packages

There are some packages that do not comply with the rule that all packages start with org.jnode. These are:

  • java.*, javax.*
    Contains the classpath implementation of the standard java libraries.

  • gnu.*
    Contains implementation classes of the classpath library.
  • org.vmmagic.pragma
    Contains exception classes and interfaces that have special meaning to the virtual machine and especially the native code compilers. These classes are mostly shared with the Jikes RVM.
  • org.vmmagic.unboxed
    Contains non-normal classes that are used as pointers to raw memory, object references and architecture dependent integers (words). These classes have a special meaning to the virtual machine and especially the native code compilers and should never be instantiated or used without a good knowledge of their meaning. These classes are mostly shared with the Jikes RVM.

Source file requirements


All java source files must contain the standard JNode header found in <jnode>/all/template/header.txt.

Do not add extra information to the header, since this header is updated automatically, at which time these extra pieces of information are lost.

Add any extra information about the class to the class's javadoc comment. If you make a significant contribution to a class, feel free to add yourself as an @author. However, adding a personal copyright notice is "bad form", and unnecessary from the standpoint of copyright law. (If you are not comfortable with this, please don't contribute code to the project.)


All Java source files and other text-based files in the JNode project must be US-ASCII encoded. This means that extended characters in Java code must be encoded in the '\uxxxx' form. Lines should end with an ASCII linefeed (LF) character, not CR LF or LF CR, and hard tab (HT) characters should not be used.
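For example, an accented character in a string literal is written with its escape rather than typed directly (the AsciiOnly class is invented for this sketch):

```java
class AsciiOnly {
    // "café" spelled in pure US-ASCII source text: the é is escaped as \u00e9.
    static final String CAFE = "caf\u00e9";
}
```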

If there is a pressing need to break these rules in some configuration or regression data file, we can make an exception. However, it is advisable to highlight the use of "nasty" characters (e.g. as comments in the file) so that someone doesn't accidentally "fix" them.

Plugin framework

In JNode, all code, services and resources are packaged into plugins.

Each plugin has a descriptor that defines the packages it contains, the plugins it depends on, and any extensions. The plugin-descriptors are held in the descriptors/ directory of each subproject. During the build, once the subprojects have been compiled, the plugins are assembled based on the descriptors that are found.

Plugins are collectively packaged into an initjar. This jar file is passed on the command line to grub when booting JNode and defines what is available to JNode during boot (drivers and such), as well as after boot (commands/applications).

-- JNode Plugins --

A JNode plugin is defined by an xml file, its descriptor, contained in the descriptors/ directory of the subproject it belongs to. Filesystem plugins are in fs/descriptors, shell plugins in shell/descriptors and so on.

The root node of a plugin descriptor is <plugin>, which takes a few required attributes that give the id, name, version and license.

id : the plugin id. This is the name that other plugins will use for dependencies, and the plugin-list will 
     use to include the plugin in an initjar.
name : A short descriptive name of what the plugin is.
version : The version of the plugin. For non-jnode plugins, this should be the version of the software being
          included. For JNode plugins, use @[email protected]
license-name : the name of the license that covers the code in the plugin. JNode uses lgpl.
provider-name : The name of the project that provided the code, for jnode plugins.
class(optional) : If the plugin requires special handling when loading/unloading the plugin, it can define a
                  class here that extends org.jnode.plugin.Plugin, overriding the start() and stop() methods.

Under the <plugin> node are definitions for different parts of the plugin. Here you define what the plugin includes, what it depends on, and any extensions.

The <runtime> node defines what a plugin is to include in its jar-file.

  <library name="foo.jar">
    <export name="foo.*"/>
  </library>

This will export the classes that match foo.* in foo.jar to the plugin's jar file. This is how you would include classes from a non-jnode library into a plugin for use in jnode. To have a plugin include jnode-specific classes, the library name is of the form "jnode-.jar", which tells the plugin builder not to look in a jar file, but to pull the classes from the build/ directory of that jnode subproject.

To declare dependencies for a plugin, list <import> nodes under a <requires> node.

  <import plugin=""/>

This adds a dependency on the imported plugin. The dependency does two things. When a plugin is included in a plugin-list, its dependencies must also be included, or the initjar builder will fail.

Each plugin has its own classloader. If commands or applications defined in a plugin are run, instead of using a classpath to find classes and jars, the plugin uses the dependencies to search for the proper classes. Every plugin class loader has access to the system plugins, its own plugin, and any plugins listed as dependencies. This means that no plugin needs to require a system plugin.

The last part of a plugin is the extensions. These are not specific to plugins, but rather to different parts of jnode that use the plugin. An extension is defined as:

<extension point="some.extension.point">

The content of an extension is defined by its point. Below is a brief list of extension points and where to find documentation on them.

Shell Extensions
    Used to define aliases for the alias manager in the shell.

    Used to define a syntax for command line arguments to an alias.

Core Extensions
    Used to define what permissions the plugin is granted.

-- Plugin List --

A plugin list is used to build an initjar and includes all the plugin jars that are defined in its list. The default plugin lists are in all/conf; these lists are read and their initjars built by default. To change this behavior there are two options that can be added to tell the build system where to look for custom plugin-lists, and also to turn off building the default initjars.

custom.plugin-list.dir = 
    Directory can be any directory. ${root.dir} can be used to prefix the path with the directory of your jnode build.
no.default.initjars = 1
    Set to 1 to disable building the default initjars

A plugin list has a very simple definition. The root node is <plugin-list> that takes a single name attribute that will be the name of the actual initjar. The list of plugins are defined by adding <plugin id="some.plugin"> entries. If a plugin is included that has dependencies, and those plugins are not in the list, the initjar builder will fail.

You can add entries to the initjar manifest file by adding a <manifest> node with a list of <attribute> nodes. Each attribute node has two attributes of its own: key and value. At a minimum you will want the following manifest entries:

  <attribute key="Main-Class" value=""/>
  <attribute key="Main-Class-Arg" value="boot"/>

This tells jnode, when it finishes initializing and loads the initjar, to run CommandShell.main() with the single argument "boot", so that the shell knows it is the root shell.
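Putting the pieces together, a minimal custom plugin-list might look like the sketch below. The list name and plugin id shown are illustrative, and the Main-Class value is left as a placeholder since it must name your shell's main class:

```xml
<plugin-list name="my-initjar">
    <plugin id="org.jnode.shell"/>
    <!-- ...plus every plugin the listed plugins depend on... -->
    <manifest>
        <attribute key="Main-Class" value="..."/>
        <attribute key="Main-Class-Arg" value="boot"/>
    </manifest>
</plugin-list>
```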

There are many reasons to create your own initjar plugin-list. The most basic reason would be to reduce the overhead of building jnode. By turning off building the default initjars, and defining your own plugin-list for a custom initjar, you can reduce the rebuild time of jnode when making simple changes. It can also allow you to create new plugins and define them in a plugin-list without disturbing the default initjar plugin-lists.

For a basic starting point, the shell-plugin-list.xml creates an initjar that has the minimal plugins for loading jnode and starting a CommandShell. From there you can add plugins that you want, to add various features.

How to add a plugin to JNode

This page describes how to add a java program to JNode as a plugin, so that it can be called via its alias.

First of all you need to set up Eclipse (or your favorite IDE) as described in the readme, so that JNode builds without errors and you can run it (e.g. use JNode in VMWare).

There are different ways of extending JNode with a plugin.
A plugin can contain a class that extends Plugin and/or normal java programs.
Every plugin is described by a descriptor.

For our example we will develop a plugin that contains a normal java program.

We need a name for our plugin: we will use sample, which is also the package name of our plugin.
It belongs to one of JNode's subprojects; in our case we will use the folder name sample in the shell subproject.

Every java-file for our plugin has to be in (or in subfolders):


(for me it is d:\jnode\shell\src\shell\org\jnode\shell\sample)

Now we will write a small java file which will be one of our plugin programs.
Here is the source of the file:


public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("HelloWorld");
    }
}

That's OK, but it will not be built until we create a descriptor and add our plugin to the JNode full-plugin-list.xml.

The plugin descriptor is stored in the descriptors folder of the shell subproject and looks like this:

<?xml version="1.0" encoding="UTF-8"?>

<!DOCTYPE plugin SYSTEM "jnode.dtd">

<plugin id=""
        name="Sample Plugin">

    <requires>
        <import plugin=""/>
    </requires>

    <runtime>
        <library name="jnode-shell.jar">
            <export name="*"/>
        </library>
    </runtime>

    <extension point="">
        <alias name="HelloWorld" class=""/>
    </extension>
</plugin>

Now we need to add our plugin to the JNode full-plugin-list.xml; this file is located in jnode\all\conf. Your entry should look like this:


            <plugin id="org.jnode.util"/>

            <plugin id="org.jnode.vm"/>

            <plugin id="org.jnode.vm.core"/>

            <plugin id=""/>


That's it. You can now build JNode and test your HelloWorld plugin by typing HelloWorld.

What we can do now is add "normal" programs to JNode via its plugin structure.

Command Line Interface

Arguments - The Basics

In JNode's command line interface, the Argument types are the command programmer's main tool for interacting with the user. The Argument provides support to the syntax mechanism to accept parameters, or to reject malformed parameters with a useful error message. The argument also supplies completion support, allowing a command to provide specific completions on specific domains.


At the moment, Arguments are mostly grouped into the shell project under the package. For the time being they will remain there. There is an effort being made to 'untangle' the syntax/argument APIs, so this designation is subject to change in the future.

New arguments that are created should be placed into the cli project under the org.jnode.command.argument package if their only use is by the commands under the cli project.

How it works

Every command that accepts an option will require Arguments to capture the options and their associated values. The syntax parser makes use of an argument 'label' to map a syntax node to a specific argument. The parser then asks the Argument to 'accept' the given value. The argument may reject the token if it doesn't satisfy its requirements, and provide a suitable error message as to why. If it accepts the token, then it will be captured by the argument for later use by its command.

Arguments also provide the ability to 'complete' a partial token. In some situations completions are not possible or do not make sense, but in many situations completions can be very helpful: they save on typing, reduce errors, and even provide a little help if there are a lot of options. The more characters there are in the token, the narrower the list of completions becomes. If the argument supplies only a single completion, this completion will be filled in for the user. This is a very powerful capability that can be used to great effect!

Using arguments

Before writing a command, it is important to consult the various specifications that many commands may have. Once you have an idea of the arguments you will need for the command, and you have a syntax put together, you can begin by adding your arguments to the command.

Along with the label that was discussed earlier, commands also take a set of flags. A set of flags is supplied by the Argument class, but individual Argument types may also supply their own specific flags. At the end of this document will be a list of known flags and their purpose, but for now we will discuss the common Argument flags.

By default arguments are 'SINGLE'. This means that the argument may only contain one value. In order to change this, and allow the argument to capture multiple values, you must set the MULTIPLE flag.
By default arguments are 'OPTIONAL'. This means that if the argument is not populated with any values, it will not be considered an error. In order to have the parser fail if the argument is not populated with at least one value, set the MANDATORY flag.
These flags are not used by all arguments and may be used to alter the behavior of 'accept' and 'complete' depending on the argument and its values. As an example, the FileArgument by default will accept almost any string that denotes a legal file name. In order to force it to only accept tokens that denote an existing file on the file system, set the EXISTING flag. In order to force it to only accept tokens that denote a file that does not already exist, set the NONEXISTENT flag.

Most arguments have overloaded constructors that allow you to not set any flags. If no such constructor exists, then feel free to create one! Alternatively, it is safe to provide '0' (zero) for the flags parameter to mean no flags.

Once you have created the arguments that your command will need, you need to 'register' them. This must be done in the Command constructor. Each argument needs to be passed to the registerArguments(Argument...) method. Once this is done, your arguments are ready to be populated by the syntax parser.

(Note: Arguments that have been registered but do not have a matching syntax node for their label will not cause an error at runtime, but they do make trouble for the 'help' command. For this reason it is recommended not to register arguments that have not yet been mapped in the syntax.)
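The construction-and-registration pattern can be sketched as follows. ArgSketch, its flag constants and CopyCommandSketch are simplified stand-ins invented for this example so that it is self-contained; real JNode code uses the Argument classes and registerArguments(Argument...) from the shell's syntax API:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for an Argument type and its flags.
class ArgSketch {
    static final int MANDATORY = 1;
    static final int MULTIPLE = 2;

    final String label;
    final int flags;

    ArgSketch(String label, int flags) {
        this.label = label;
        this.flags = flags;
    }
}

class CopyCommandSketch {
    private final List<ArgSketch> registered = new ArrayList<ArgSketch>();

    // MANDATORY: the parser fails if no value is captured.
    // MULTIPLE: more than one value may be captured.
    final ArgSketch sources =
        new ArgSketch("sources", ArgSketch.MANDATORY | ArgSketch.MULTIPLE);
    final ArgSketch target = new ArgSketch("target", ArgSketch.MANDATORY);

    CopyCommandSketch() {
        // Registration happens in the command's constructor, as required.
        registerArguments(sources, target);
    }

    final void registerArguments(ArgSketch... args) {
        for (ArgSketch a : args) {
            registered.add(a);
        }
    }

    int registeredCount() {
        return registered.size();
    }
}
```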

Querying arguments

When your command enters its execute() method, the arguments will be populated with any values that were captured from the command line. For the most part, you will only need to be concerned with three methods supplied by Argument.

public boolean isSet()
If the argument has accepted and captured a token, then this method will return true. Commands should always check this method before querying for the captured values. If you query an argument for its values when it has none, the behavior is undefined and the return value (or possible exception) is unspecified and subject to change without notice. (The one case where this is not totally true is when an argument has the MANDATORY flag, as in this case this will _always_ return true. Though it is still considered 'good practice' to check this method before querying for values)
public V getValue()
This method returns the single value of an argument that was registered as SINGLE. If the argument has the MULTIPLE flag set, this method should not be used as it will throw an exception if there is more than one value captured by the argument. If there are no values captured, this currently returns null, but as noted earlier, this may not always be the case, and should not be relied upon.
public V[] getValues()
This method returns the captured values as an array. Calling this method when the SINGLE flag is set is perfectly acceptable, though it is usually more convenient to use the getValue() method.
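The querying pattern described above can be sketched like this. ArgValueSketch and TouchSketch are simplified stand-ins invented for this example (real JNode arguments are populated by the syntax parser, not by a capture() call):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for an Argument<V>, showing isSet()/getValue()/getValues().
class ArgValueSketch<V> {
    private final List<V> values = new ArrayList<V>();

    void capture(V value) {
        values.add(value);
    }

    boolean isSet() {
        return !values.isEmpty();
    }

    V getValue() {
        if (values.size() > 1) {
            // Mirrors the documented behavior: getValue() is for SINGLE arguments.
            throw new IllegalStateException("multiple values captured");
        }
        return values.isEmpty() ? null : values.get(0);
    }

    List<V> getValues() {
        return values;
    }
}

class TouchSketch {
    final ArgValueSketch<String> file = new ArgValueSketch<String>();

    String execute() {
        // Always check isSet() before asking for the value.
        if (!file.isSet()) {
            return "usage: touch FILE";
        }
        return "touch " + file.getValue();
    }
}
```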

That's about it for arguments. Simple, huh? Arguments are designed to allow for rapid development of commands and as such provide a nice simple interface for using arguments 'out of the box', so to speak. But the real power of arguments is their ability to be extended and manipulated in many ways so as to provide a more feature-filled command line interface.

Basic argument types

Here is a list of the more common argument types, along with a short description of their purpose, features and usage.

AliasArgument
An argument that accepts an 'alias'. It provides completion against the aliases that have been registered via plugin descriptors, and may also include the bjorne-style aliases if the bjorne interpreter is in use. (I'm not sure if it currently does; if not, it should!)
FileArgument
An argument that accepts a file or directory pathname. As used in an example above, this argument is affected by the EXISTING and NONEXISTENT flags of Argument. FileArgument also currently provides two flags of its own. ALLOW_DODGY_NAMES overrides the 'accept' and 'complete' behavior to allow filenames that begin with a '-' (hyphen); normally FileArgument would consider such a filename an error and reject the token. There is also the HYPHEN_IS_SPECIAL flag, which allows a single '-' to be accepted. Its purpose is to allow '-' to appear amongst a list of files, denoting that standard input or output should be used instead. This feature is subject to change (pending some 'better way' of handling this).
FlagArgument
This is likely to be the most used argument of all. It is used to denote an option that has no associated argument. This argument does not actually capture a token; instead it holds a single value of true if the flag has been found. This means that you can use isSet() to map its value to a local/instance boolean. In some cases a command may wish to allow a flag to be given multiple times to add meaning; such a command can use getValues().length to determine the number of times the flag was specified.
IntegerArgument / LongArgument / DecimalArgument(TODO)
Allows an integer value to be captured on the command line. These arguments do not provide very helpful completion, as their domain of completions is generally too large. Their main purpose is to parse valid integers, rejecting those that are malformed.
StringArgument
This is one of the most 'accepting' arguments: it will accept any token given to it, and it provides no completion. If your command really needs an unbounded String, this is the right argument to use. This argument should be extended for cases where you want to accept a string from a limited domain, rejecting tokens outside that domain and possibly providing completion.
URLArgument
Similar in some respects to FileArgument, the URLArgument accepts tokens that represent a valid URL. This argument also respects the EXISTING and NONEXISTENT flags. Completion of parts of a URL (the scheme, for example) may be feasible, but completion of domain names and the like is nearly impossible. It should also be noted that the EXISTING and NONEXISTENT flags will likely cause a DNS lookup to be performed.

Syntax - Defining Commands

The syntax of a command is the definition of options, symbols and arguments that are accepted by commands. Each command defines its own syntax, allowing customization of flags and parameters, as well as defining order. The syntax is constructed using several different mechanisms, which when combined, allow for a great deal of control in restricting what is acceptable in the command line for a given command.

How it works

When you define a new command, you must define a syntax bundle within a syntax extension point. When the plugin is loaded, the syntax bundle is parsed from the descriptor and loaded into the syntax manager. When the bundle is needed, for completion or when preparing for execution, it is retrieved from the manager. Because a syntax bundle is immutable, it can be cached completely and used concurrently.

The help system also uses the syntax to create usage statements and to map short and long flags to the descriptions of their arguments.

The puzzle pieces

See this document page for a concise description of the various syntax elements.

When setting out to define the syntax for a command, it is helpful to lay out the synopsis and options that the command will need. The synopses of a command can be used to define separate modes of operation. The syntax block itself is an implied <alternatives>, which means that if parsing one alternative fails, the next will be tried. To give an example of how breaking a command down into multiple synopses can be helpful, we'll set up the syntax for a hypothetical 'config' command that allows listing, setting and clearing of some system configuration.

First, our synopsis...

config -l
    Lists all known configuration options and their values

And our syntax...

<syntax alias="config">
  <empty />
  <option argLabel="list" shortName="l" />
  <sequence>
    <option argLabel="set" shortName="s" />
    <argument argLabel="value" />
  </sequence>
  <option argLabel="clear" shortName="c" />
</syntax>

To be continued...

Utility classes

The cli project contains a few utility classes that make it easier to implement common features across multiple commands. It is recommended that these classes be used when possible; they are quite well documented, and provide fairly specific information on their behavior and usage. A brief outline is provided here, along with links to the actual javadoc pages.


ADW is _the_ tool for doing recursive directory searches. It provides a Visitor pattern interface, with a set of specific callbacks for the implementor to use. It has many options for controlling what it returns, and with the right configuration, can be made to do very specific searching.


The walker is mainly controlled by FileFilter instances. Multiple filters can be supplied, providing an implied '&&' between each filter. If any of the filters reject the file, then the extending class will not be asked to handle the file. This can be used to create very precise searches by combining multiple boolean filters with specific filter types.

The walker also provides the ability to filter files and directories based on depth. When the minimum depth is set, files and directories at or above a given level will not be handled. The directories that are passed to walk() are considered to be at level 0; therefore setting a min-depth of 0 will not pass those directories to the callbacks. When the maximum depth is set, directories at the maximum depth level will not be recursed into. They will, however, still be passed to the callbacks, pending acceptance by the filter set. Therefore setting the maximum depth to 0 may pass the initial directories supplied to walk() to the callbacks, but the walker will not recurse into them.
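The depth rules above can be sketched with a small recursive walk over a toy in-memory tree. This is a model of the rules as described, not the actual walker code; all class and method names here are invented.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the min/max depth rules described above; illustrative only,
// not the actual JNode directory walker code.
public class DepthRulesDemo {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<Node>();
        Node(String name) { this.name = name; }
        void add(Node child) { children.add(child); }
    }

    // Collects the names of entries that would be passed to the callbacks.
    static void walk(Node dir, int depth, int minDepth, int maxDepth, List<String> handled) {
        // Entries at the minimum depth itself are not handled: a min-depth of 0
        // excludes the initial directories, which sit at level 0.
        if (depth > minDepth) {
            handled.add(dir.name);
        }
        // A directory at the maximum depth is not recursed into.
        if (depth >= maxDepth) {
            return;
        }
        for (Node child : dir.children) {
            walk(child, depth + 1, minDepth, maxDepth, handled);
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        Node a = new Node("a");
        root.add(a);
        a.add(new Node("b"));
        List<String> handled = new ArrayList<String>();
        walk(root, 0, 0, 1, handled);  // min-depth 0, max-depth 1
        System.out.println(handled);   // prints: [a]
    }
}
```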

Note: Boolean filters are not yet implemented, but they are on the short list.

Extending the walker

Although you can extend the walker in a class of its own, the recommended design pattern is to implement the walker as a non-static inner class or an anonymous inner class. This design gives the implemented callbacks of the walker access to the internals of the command it is used in. When the walker runs, it passes accepted files and directories to the appropriate callback methods. The walker also has callbacks for specific events, including the beginning and end of a walk, as well as when a SecurityException is encountered while attempting to access a file or directory.

public abstract void handleFile(File)
Tells the implementing class that a regular file has been found and accepted.
public abstract void handleDir(File)
Tells the implementing class that a directory has been found and accepted.
public void handleSpecialFile(File)
Tells the implementing class that a file has been found that is neither a directory nor a regular file.
protected void handleRestrictedFile(File)
Tells the implementing class that it has found a file that triggered a SecurityException. By default this method throws an IOException, which causes walking to halt completely. This is usually undesired, so it is highly recommended to override this method to provide a suitable error message and, optionally, continue walking.

protected void handleStartingDir(File)
Tells the implementing class that it is about to start walking the file system from the given file. This is triggered before the file itself is actually resolved, so the caller has a chance to do some initialization, such as changing the current working directory so that a relative path resolves against a different prefix.
protected void lastAction(boolean)
Tells the implementing class that walking has finished. If the walker stopped walking because it was requested to do so, then the boolean parameter will be true. Otherwise if the walker finished normally, it will be false.


Debugging code running on the JNode platform is no easy task. The platform currently has none of the nice debugging support that you normally find on a mature Java platform; no breakpoints, no looking at object fields, stack frames, etc.

Instead, we typically have to resort to sending messages to the system Logger, adding traceprint statements and calling 'dumpStack' and 'printStackTrace'. Here are some other pointers:

  • If you are debugging a JNode command, you may be able to use ShellEmu or TestHarness to run and debug JNode specific code in your development environment. (This may not work if your command makes use of JNode specific services.)
  • A lot of classes have no JNode platform dependencies, and can be debugged using JUnit tests running in your development sandbox.
  • If you are debugging low-level JNode code, you can use "Unsafe.debug(...)" calls and the (so called) Kernel debugger to get trace information without causing object allocation. This is particularly important when debugging the JNode memory management, etc where any object allocation could trigger a kernel panic.
  • Beware of the effects of adding debug code on JNode system performance, and on timing related bugs; e.g. race conditions.

There is also a simple debugger that can be used in textmode to display threads and their stacktraces. Press Alt-SysRq to enter the debugger and another Alt-SysRq to exit the debugger. Inside the debugger, press 'h' for usage information.

Note: the Alt-SysRq debugger isn't working at the moment: see this issue.

Kernel debugger

A very simple kernel debugger has been added to the JNode nano-kernel. This debugger is able to send all data written to the console (using Unsafe.debug) to another computer via a null-modem cable connected to COM1.

From the other computer you can give simple commands to the debugger, such as dump the processor thread queues and print the current thread.

The kernel debugger can be enabled by adding " kdb" to the grub kernel command line, or by activating it in JNode using a newly added command: "kdb".


Using "remoteout" to record console / logger output

The remoteout command allows you to send a copy of console output and logger output to a remote TCP or UDP receiver. This allows you to capture console output for bug reports, and in the cases where JNode is crashing.

Before you run the command, you need to set up a receiver application on the remote host to accept and record the output. More details (including a brief note on the JNode RemoteReceiver application) may be found in the remoteout command page. Please read the Bugs section as well!

Operating System

This part contains the technical documentation of the JNode Operating System.

Boot and startup

During the boot process of JNode, the kernel image is loaded by Grub and booted. After the bootstrapper code, we're running plain Java code. The first code executed is in org.jnode.boot.Main#vmMain(), which initializes the JVM and starts the plugin system.

Driver framework

The basic device driver design involves 3 components:

  • Device: a representation of the actual hardware device
  • Driver: a software driver able to control a Device
  • DeviceAPI: a programming interface for a Device, usually implemented by the Driver.

There is a DeviceManager where all devices are registered. It delegates to DeviceToDriverMapper instances to find a suitable driver for a given device. Instances of this mapper interface use e.g. the PCI id of a device (in case of PCIDevice) to find a suitable driver. This is configurable via a configuration file.

For a device to operate there are the following resources available:

  • Hardware interrupts. A driver can register an IRQHandler, which is called on its own (normal Java) thread. The native kernel signals a hardware interrupt by incrementing a counter for that interrupt, after which the thread scheduler dispatches the event to the correct IRQHandler thread.
  • DMA channels. A driver can claim a DMA channel.
    This channel can be setup, enabled and disabled.
  • IO port access. An Unsafe class has native methods for this. A device must first claim a range of IO ports before it can gain access to it.
  • Memory access. A device can claim a range of the memory address space. A MemoryResource is given to the device, which can use the methods of the MemoryResource to actually access the memory.

Filesystem framework

The filesystem support in JNode is split up into a generic part and a filesystem specific part. The role of the generic part is:

  1. Keep track of all mounted filesystems.
  2. Map between path names and filesystem entries.
  3. Share filesystem entries between various threads/processes.

The role of the filesystem specific part is:

  1. Store and retrieve files.
  2. Store and retrieve directories.

We should be more specific about what a filesystem is. JNode makes a distinction between a FileSystemType and a FileSystem. A FileSystemType has a name, can detect filesystems of its own type on a device, and can create FileSystem instances for a specific device (usually a disk). A FileSystem implements the storing and retrieving of files and directories.
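The FileSystemType/FileSystem split can be sketched with a pair of toy interfaces. The names and signatures below are invented for illustration; they are not the real JNode interfaces.

```java
// Illustrative sketch of the two-level filesystem design; these are NOT
// the real JNode interfaces, just a model of the split described above.
interface Device {
    String id();
}

interface FileSystem {
    String name();   // a real FileSystem would store/retrieve files and dirs
}

interface FileSystemType {
    String getName();
    boolean supports(Device device);   // detect filesystems of this type on a device
    FileSystem create(Device device);  // build a FileSystem instance for the device
}

public class FsDemo {
    // A toy type that "detects" its filesystem on devices whose id starts with "disk".
    static class ToyType implements FileSystemType {
        public String getName() { return "toy"; }
        public boolean supports(Device d) { return d.id().startsWith("disk"); }
        public FileSystem create(final Device d) {
            return new FileSystem() {
                public String name() { return "toy on " + d.id(); }
            };
        }
    }

    public static void main(String[] args) {
        Device disk = () -> "disk0";
        FileSystemType type = new ToyType();
        // A DeviceManager-like mapper would try each registered type in turn.
        if (type.supports(disk)) {
            System.out.println(type.create(disk).name()); // prints: toy on disk0
        }
    }
}
```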

To access files in JNode, use the regular classes in the java.io package. They are connected to the JNode filesystem implementation. A direct connection to the filesystem implementation is not allowed.

FrameBuffer devices

This chapter details the FrameBuffer device design and the interfaces involved in the design.


All framebuffer devices must implement this API.


TODO write me.


TODO write me.


TODO write me.


TODO write me.

Network devices

This chapter details the design of network devices and describe the interfaces involved.


Every network device must implement this API.

The API contains methods to get the hardware address of the device, send data through the device and get/set protocol address information.

When a network device receives data, it must deliver that data to the NetworkLayerManager. The AbstractNetworkDriver class (which is usually the base class for all network drivers) contains a helper method (onReceive) for this purpose.

Network protocols

This chapter will detail the interfaces involved in the network protocol layer.


This interface must be implemented by all network protocol handlers.


This interface must be implemented by OSI transport layer protocols.


This interface must be implemented by OSI link layer protocols.


To register a network layer, the network layer class must be specified in an extension of the "" extension point.
This is usually done in the descriptor of the plugin that holds the network layer.

Architecture specifics

This chapter contains the specific technical operating system details about the various architectures that JNode is operating on.

X86 Architecture

The X86 architecture targets the Intel IA32 architecture as implemented by the Intel Pentium (and up) processors and the AMD Athlon/Duron (etc.) processors.

Physical memory layout

This architecture uses a physical memory layout as given in the picture below.

X86 Memory Map


The new command line syntax mechanism

In the classical Java world, a command line application is launched by calling the "main" entry point method on a nominated class, passing the user's command arguments as an array of Strings. The command is responsible for working out which arguments represent options, which represent parameters and so on. While there are (non-Sun) libraries to help with this task (like the Java version of GNU getOpts), they are rather primitive.

In JNode, we take a more sophisticated approach to the issue of command arguments. A native JNode command specifies its formal arguments and command line syntax. The task of matching actual command line arguments is performed by JNode library classes. This approach offers a number of advantages over the classical Java approach:

  • The application programmer has less work to do.
  • The user sees more uniform command syntax.
  • Diagnostics for incorrect command arguments can be more uniform.

In addition, this approach allows us to do some things at the Shell level that are difficult with (for example) UNIX style shells.

  • The JNode shell does intelligent command line completion based on a command's declared syntax and argument types. For example, if the syntax requires a device name at the cursor position when the user hits TAB, the JNode shell will complete against the device namespace.
  • The JNode help command uses a command's declared syntax to produce accurate "usage" and parameter type descriptions. These can be augmented by descriptions embedded in the syntax, or in separate files.
  • In the new version of the JNode syntax mechanisms, command syntaxes are specified in XML separate from the Java source code. Users can tailor the command syntax, like UNIX aliases only better. This can be used to support portable scripting; e.g. Unix-like command syntaxes could be used with a POSIX shell compatible interpreter to run Unix shell scripts.

As the above suggests, there are two versions of the JNode command syntax and its associated mechanisms, i.e. parsing, completion, help and so on. In the first version (the "old" mechanisms) the application class declares a static Argument object for each formal parameter, and creates a static "Help.Info" data structure containing Syntax objects that reference the Arguments. The command line parser and completer traverse these data structures, binding values to the Arguments.

The problems with the "old" mechanisms include:

  • Use of statics to hold the Argument and Help.Info objects makes JNode commands non-reentrant, leading to unpredictable results when a command is executed in two threads.
  • The Syntax, Argument and associated classes were never properly documented, making them hard to maintain and hard to use.
  • There were numerous bugs and implementation issues; e.g. Unix-style named options didn't work, completion didn't work properly with alternative syntaxes, and so on.
  • Command syntaxes could not be tailored, as described above.

The second version (the "new" mechanisms) is a ground-up redesign and reimplementation:

  • Argument objects are created by the command class constructor, and registered to form an ArgumentBundle. Thus, command syntax is not an impediment to making command classes re-entrant.
  • Syntax objects are created from XML that is defined in the command's plugin descriptor, and that can be overridden from the JNode shell using the "syntax" command.
  • The "new" Syntax classes are much richer than the "old" versions. Each Syntax class has a "prepare" method that emits a simple BNF-like grammar; i.e. the MuSyntax classes. This grammar is used by the MuParser, which performs n-level backtracking and supports "normal" and "completion" modes. (Completion mode parsing works by capturing completions at the appropriate point and then initiating backtracking to find other alternatives.)

A worked example: the Cat command.

(This example is based on material provided by gchii)

The cat command is a JNode file system command for the concatenation of files.
The alternative command line syntaxes for the command are as follows:

 cat ( -u | --urls ) <url> ...
 cat <file> ...

The simplest use of cat is to copy a file to standard output, displaying the contents of the file; for example:

 cat d.txt

The following example displays a.txt, followed by b.txt and then c.txt.

 cat a.txt b.txt c.txt

The following example concatenates a.txt, b.txt and c.txt, writing the resulting file to d.txt.

 cat a.txt b.txt c.txt > d.txt

In fact, the > output redirection in the example above is performed by the command shell and interpreter, and the "> d.txt" arguments are removed before the command arguments are processed. As far as the command class is concerned, this is equivalent to the previous example.

Finally, the following example displays the raw HTML for the JNode home page:
cat --urls http ://

Syntax specification
The syntax for the cat command is defined in fs/descriptors/org.jnode.fs.command.xml.

The relevant section of the document is as follows:

   39   <extension point="">
   40     <syntax alias="cat">
   41       <empty description="copy standard input to standard output"/>
   42       <sequence description="fetch and concatenate urls to standard output">
   43         <option argLabel="urls" shortName="u" longName="urls"/>
   44         <repeat minCount="1">
   45           <argument argLabel="url"/>
   46         </repeat>
   47       </sequence>
   48       <repeat minCount="1" description="concatenate files to standard output">
   49         <argument argLabel="file"/>
   50       </repeat>
   51     </syntax>

Line 39: "" is an extension point for command syntax.

Line 40: The syntax entity represents the entire syntax for a command. The alias attribute is required and associates a syntax with a command.

Line 41: When parsing a command line, the empty tag consumes no arguments; its description attribute documents this mode of the cat command.

Line 42: A sequence tag groups options, arguments and other syntax elements that must appear in order.

Line 43: An option tag is a command line option, such as -u and --urls. Since -u and --urls are one and the same option, the argLabel attribute identifies the option internally.

Line 44: A repeat tag allows its content to be matched multiple times. When minCount is 1 or more, at least that many matches are required; here, at least one <url> is required.

Line 45: An argument tag consumes one command line argument.

Line 48: This repeat requires at least one <file> argument, since minCount is 1.

Line 49: An argument tag consumes one command line argument.

The cat command is implemented in the CatCommand class. The salient parts of the command's implementation are as follows.

    private final FileArgument ARG_FILE =
        new FileArgument("file", Argument.OPTIONAL | Argument.MULTIPLE,
                "the files to be concatenated");

This declares a formal argument to capture JNode file/directory pathnames from the command line; see the specification of the FileArgument class for details. The "Argument.OPTIONAL | Argument.MULTIPLE" parameter gives the argument flags: Argument.OPTIONAL means that this argument may be optional in the syntax, and Argument.MULTIPLE means that the argument may be repeated in the syntax. Finally, the "file" label matches the "file" argLabel attribute in the XML above at line 49.

    private final URLArgument ARG_URL =
        new URLArgument("url", Argument.OPTIONAL | Argument.MULTIPLE,
                "the urls to be concatenated");

This declares a formal argument to capture URLs from the command line. This matches the "url" argLabel attribute in the XML above at line 45.

    private final FlagArgument FLAG_URLS =
        new FlagArgument("urls", Argument.OPTIONAL, "If set, arguments will be urls");

This declares a formal flag argument that matches the "urls" argLabel attribute in the XML above at line 43.

    public CatCommand() {
        super("Concatenate the contents of files, urls or standard input to standard output");
        registerArguments(ARG_FILE, ARG_URL, FLAG_URLS);
    }

The constructor for the CatCommand registers the three formal arguments, ARG_FILE, ARG_URL and FLAG_URLS. The registerArguments() method is implemented in AbstractCommand. It simply adds the formal arguments to the command's ArgumentBundle, making them available to the syntax mechanism.

    public void execute() throws IOException {
        this.err = getError().getPrintWriter();
        OutputStream out = getOutput().getOutputStream();
        File[] files = ARG_FILE.getValues();
        URL[] urls = ARG_URL.getValues();
        boolean ok = true;
        if (urls != null && urls.length > 0) {
            for (URL url : urls) {
                // ... fetch and copy each url to out ...
            }
        } else if (files != null && files.length > 0) {
            for (File file : files) {
                // ... open and copy each file to out ...
            }
        } else {
            process(getInput().getInputStream(), out);
        }
        out.flush();
        if (!ok) {
            exit(1);
        }
    }

The "execute" method is called after syntax processing has occurred, and after the command argument values have been converted to the relevant Java types and bound to the formal arguments. As the code above shows, the method uses methods on the formal arguments to retrieve the actual values. Other methods implemented by AbstractCommand allow "execute" to access the command's standard input, output and error streams as Stream objects or Reader/Writer objects, and to set the command's return code.

Note: ideally the syntax of the JNode cat command should include this alternative:

 cat ( ( -u | --urls ) <url> | <file> ) ...

or even this:

 cat ( <url> | <file> ) ...

allowing <file> and <url> arguments to be interspersed. The problem with the first alternative syntax above is that the Argument objects do not allow the syntax to capture the complete order of the interspersed <file> and <url> arguments. In order to support this, we would need to replace ARG_FILE and ARG_URL with a suitably defined ARG_FILE_OR_URL. The problem with the second alternative syntax above is that some legal <url> values are also legal <file> values, and the syntax does not allow the user to control the disambiguation.

For more information, see also org.jnode.fs.command.xml.

Ideas for future Syntax enhancements

Here are some ideas for work to be done in this area:

  • Extend OptionSetSyntax to support "--" as meaning everything after here is not an option.
  • Make OptionSetSyntax smarter in its handling of repeated options. For example completing "cp --recursive " should not offer "--recursive" as a completion.
  • Improve "help", including improving the output, incorporating more descriptions from the syntax, in preference to descriptions from the Command class, and supporting multi-lingual descriptions. (In fact, we need to go a lot further ... including supporting full documentation complete with a way to specify markup and cross-references. But that's a different problem really.)
  • Extend the Argument APIs so that we can specify (for example) that a FileArgument should match an existing file, an existing directory, a path to an object that does not exist, etc. This potentially applies to all name arguments over dynamic namespaces.
  • Extend the Argument APIs to support expansion of patterns against the FS and other namespaces. This needs to be done in a way that allows the user, shell and command to control whether or not expansion occurs. We don't want commands to have to understand that there are patterns at all .... except in cases where the command needs to know (e.g. some flavours of rename command). And we also need to cater for shell languages (e.g. UNIX derived ones) where FS pattern expansion is clearly a shell responsibility.
  • Add support for command-specific Syntax classes; e.g. to support complex command syntaxes like UNIX style "expr" and "test" commands.
  • Add command syntax support for command-line interactive commands like old-school UNIX ftp and nslookup. (In JNode, we already have a tftp client that runs this way.)
  • Implement a compatibility library to allow JNode commands to be executed in the classic Java world.

JNode Command and Syntax APIs

This page is an overview of the JNode APIs that are involved in the new syntax mechanisms. For more nitty-gritty details, please refer to the relevant javadocs.

  1. These APIs still change a bit from time to time. (But if your code is in the JNode code base, you won't need to deal with these changes.)
  2. The javadocs on the JNode website currently do not include the "shell" APIs.
    You can generate the javadocs in a JNode build sandbox by running "./ javadoc".

  3. If the javadocs are inadequate, please let us know via a JNode "bug" request.

Java package structure

The following classes mostly reside in the "" package. The exceptions are "Command" and "AbstractCommand" which live in "". (Similarly named classes in the "" and "" packages are part of the old-style syntax support.)

The JNode command shell (or more accurately, the command invokers) understand two entry points for launching classes as "commands". The first entry point is the "public static void main(String[])" entry point used by classic Java command line applications. When a command class has (just) a "main" method, the shell will launch it by calling the method, passing the command arguments. What happens next is up to the command class:

  • A non-JNode application will typically deal with the command arguments itself, or using some third party class like "gnu.getopt.GetOpt".
  • A JNode-aware application can also use the old-style syntax method directly, by calling a "Help.Info" object's "parse(String[])" method on the argument strings.

The preferred entry point for a JNode command class is the "Command.execute(CommandLine, InputStream, PrintStream, PrintStream)" method. On the face of it, this entry point offers a number of advantages over the "main" entry point:

  • The "execute" method provides command's IO streams explicitly, rather than relying on the "System.{in,out,err}" statics. (Those statics are problematic, unless you are using proclets or isolates.)
  • The "execute" method gives the application access to more information gleaned from the command line; e.g. the command name (alias) supplied by the user.

Unless you are using the "default" command invoker, a command class with an "execute" entry point will be invoked via that entry point, even if it also has a "main" entry point. What happens next is up to the command class:

  • The "execute" method may fetch the user's argument strings from the CommandLine object and do its own argument analysis.
  • If the command class is designed to use old-style syntax mechanisms, the "execute" method will typically call the "parse(String[])" method and proceed as described above.
  • If the command class is designed to use new-style syntax mechanisms, argument analysis will already have been done. This can only happen if the command class extends the AbstractCommand class; see below.


The AbstractCommand class is a base class for JNode-aware command classes. For command classes that do their own argument processing, or that use the old-style syntax mechanisms, use of this class is optional. For commands that want to use the new-style syntax mechanisms, the command class must be a direct or indirect subclass of AbstractCommand.

The AbstractCommand class provides helper methods useful to all command classes.

  • The "exit(int)" method can be called from the command thread to terminate command execution with a return code. This is roughly equivalent to a classic Java application calling "System.exit(int)".
  • The "getInput()", "getOutput()", "getError()" and "getIO(int)" methods return "CommandIO" instances that can be used to get a command's "standard io" streams as
    Java Input/OutputStream or Reader/Writer objects.

The "getCommandLine" method returns a CommandLine instance that holds the command's command name and unparsed arguments.

But more importantly, the AbstractCommand class provides infrastructure that is key to the new-style syntax mechanism. Specifically, the AbstractCommand maintains an ArgumentBundle for each command instance. The ArgumentBundle is created when either of the following happens:

  1. The child class constructor chains to the AbstractCommand(String) constructor. In this case an (initially) empty ArgumentBundle is created.
  2. The child class constructor calls the "registerArgument(Argument ...)" method. In this case, an ArgumentBundle is created (if necessary) and the arguments are added to it.

If it was created, the ArgumentBundle is populated with argument values before the "execute" method is called. The existence of an ArgumentBundle determines whether the shell uses old-style or new-style syntax, for command execution and completion. (Don't try to mix the two mechanisms: it is liable to lead to inconsistent command behavior.)

Finally, the AbstractCommand class provides an "execute(String[])" method. This is intended to provide a bridge between the "main" and "execute" entry points for situations where a JNode-aware command class has to be executed via the former entry point. The "main" method should be implemented as follows:

    public static void main(String[] args) throws Exception {
        new XxxClass().execute(args);
    }
CommandIO and its implementation classes

The CommandIO interfaces and its implementation classes allow commands to obtain "standard io" streams without knowing whether the underlying data streams are byte or character oriented. This API also manages the creation of 'print' wrappers.

Argument and sub-classes

The Argument classes play a central role in the new syntax mechanism. As we have seen above, a command class creates Argument instances to act as value holders for its formal arguments, and adds them to its ArgumentBundle. When the argument parser is invoked, it traverses the command syntax and binds values to the Arguments in the bundle. When the command's "execute" entry point is called, it can access the values bound to the Arguments.

The most important methods in the Argument API are as follows:

  • The "accept(Token)" method is called by the parser when it has a candidate token for the Argument. If the supplied Token is acceptable, the Argument uses "addValue(...)" to add the Token to its collection. If it is not acceptable, "SyntaxErrorException" is thrown.
  • The "doAccept(Token)" abstract method is called by "accept" after it has done the multiplicity checks. It is required to either return a non-null value, or throw an exception; typically SyntaxErrorException.
  • In completion mode, the parser calls the "complete(...)" method to get Argument specific completions for a partial argument. The "complete" method is supplied a CompletionInfo object, and should use it to record any completions.
  • The "isSet()", "getValue()" and "getValues()" methods are called by a command class to obtain the value or values bound to an Argument.

The constructors for the descendent classes of Argument provide the following common parameters:

  • The "label" parameter provides a name for the Argument that is used to bind the Argument to Syntax elements. It must be unique in the context of the command's ArgumentBundle.
  • The "flags" parameter specifies the Argument's multiplicity; i.e. how many values are allowed or required for the Argument. The allowed flag values are defined in the Argument class. A well-formed "flags" parameter consists of OPTIONAL or MANDATORY "or-ed" with SINGLE or MULTIPLE.
  • The "description" parameter gives a default description for the Argument that can be used in "help" messages.

The descendent classes of Argument correspond to different kinds of argument. For example:

  • StringArgument accepts any String value,
  • IntegerArgument accepts and (in some cases) completes an Integer value,
  • FileArgument accepts a pathname argument and completes it against paths for existing objects in the file system, and
  • DeviceArgument accepts a device name and completes it against the registered device names.

There are two abstract sub-classes of Argument:

  • EnumArgument accepts values for a given Java enum.
  • MappedArgument accepts values based on a String to value mapping supplied as a Java Map.

Please refer to the javadoc for an up-to-date list of the Argument classes.

Syntax and sub-classes

As we have seen above, Argument instances are used to specify the command class's argument requirements. These Arguments correspond to nodes in one or more syntaxes for the command. These syntaxes are represented in memory by the Syntax classes.

A typical command class does not see Syntax objects. They are typically created by loading an XML syntax specification (see below), and are used by various components of the shell. As such, these APIs need not concern the application developer.


ArgumentBundle

This class is largely internal, and a JNode application programmer doesn't need to access it directly. Its purpose is to act as the container for the new-style Argument instances that belong to a command class instance.

MuSyntax and sub-classes

The MuSyntax class and its subclasses represent the BNF-like syntax graphs that the command argument parser actually operates on. These graphs are created by the "prepare" method of new-style Syntax objects, in two stages. The first stage is to build a tree of MuSyntax objects, using symbolic references to represent cycles. The second stage is to traverse the tree, replacing the symbolic references with their referents.

There are currently 6 kinds of MuSyntax node:

  • MuSymbol - this denotes a symbol (keyword) in the syntax. When a MuSymbol is matched, no argument capture takes place.
  • MuArgument - this denotes a placeholder for an Argument in the syntax. When a MuArgument is encountered, the corresponding Argument's "accept" method is called to see if the current token is acceptable. If it is, the token is bound to the Argument; otherwise the parser starts backtracking.
  • MuPreset - this is a variation on a MuArgument in which a "preset" token is passed to the Argument. Unlike MuArgument and MuSymbol, a MuPreset does not cause the parser to advance to the next token.
  • MuSequence - this denotes that a list of child MuSyntax nodes must be matched in a given sequence.
  • MuAlternation - this denotes that a list of child MuSyntax nodes must be tried one at a time in a given order.
  • MuBackReference - this denotes a reference to an ancestor node in the MuSyntax tree. These nodes are replaced with their referents before parsing takes place.


MuParser

The MuParser class does the real work of command line parsing. The "parse" method takes input parameters that provide a MuSyntax graph, a TokenSource and some control parameters.

The parser maintains three stacks:

  • The "syntaxStack" holds the current "productions" waiting to be matched against the token stream.
  • The "choicePointStack" holds "choicePoint" objects that represent alternatives that the parser hasn't tried yet. The choicepoints also record the state of the "syntaxStack" when the alternation was encountered, and the top of the "argsModified" stack.
  • The "argsModified" stack keeps track of the Arguments that need to be "unbound" when the parser backtracks.

In normal parsing mode, the "parse" method matches tokens until either the parse is complete, or an error occurs. The parse is complete if the parser reaches the end of the token stream and discovers that the syntax stack is also empty. The "parse" method then returns, leaving the Arguments bound to the relevant source tokens. The error case occurs when a MuSyntax does not match the current token, or the parser reaches the end of the TokenSource when there are still unmatched MuSyntaxes on the syntax stack. In this case, the parser backtracks to the last "choicepoint" and then resumes parsing with the next alternative. If no choicepoints are left, the parse fails.

In completion mode, the "parse" method behaves differently when it encounters the end of the TokenSource. The first thing it does is to attempt to capture a completion; e.g. by calling the current Argument's "complete(...)" method. Then it starts backtracking to find more completions. As a result, a completion parse may do a lot more work than a normal parse.

The astute reader may be wondering what happens if the "MuParser.parse" method is applied to a pathological MuSyntax; e.g. one which loops for ever, or that requires exponential backtracking. The answer is that the "parse" method has a "stepLimit" parameter that places an upper limit on the number of main loop iterations that the parser will perform. This indirectly addresses the issue of space usage as well, though we could probably improve on this. (In theory, we could analyse the MuSyntax for common pathologies, but this would degrade parser performance for non-pathological MuSyntaxes. Besides, we are not (currently) allowing applications to supply MuSyntax graphs directly, so all we really need to do is ensure that the Syntax classes generate well-behaved MuSyntax graphs.)
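To make the parsing strategy above concrete, here is a minimal, self-contained sketch of a backtracking matcher with a syntax stack, a choicepoint stack and a step limit. This is NOT JNode code: all names (TinyParser, Sym, Seq, Alt, ChoicePoint) are invented for illustration, and the real MuParser is considerably richer (argument binding, completion mode, token sources).

```java
import java.util.*;

// Illustrative only: a tiny backtracking matcher in the spirit of MuParser.
// All class names here are hypothetical stand-ins for the MuSyntax classes.
public class TinyParser {

    interface Node {}
    record Sym(String text) implements Node {}          // cf. MuSymbol
    record Seq(List<Node> children) implements Node {}  // cf. MuSequence
    record Alt(List<Node> children) implements Node {}  // cf. MuAlternation

    // A choicepoint records the syntax stack, the token position and the
    // index of the next untried alternative.
    record ChoicePoint(Deque<Node> stack, int pos, int nextAlt, Alt alt) {}

    static Node sym(String s) { return new Sym(s); }
    static Node seq(Node... n) { return new Seq(List.of(n)); }
    static Node alt(Node... n) { return new Alt(List.of(n)); }

    static boolean parse(Node root, List<String> tokens, int stepLimit) {
        Deque<Node> syntaxStack = new ArrayDeque<>();
        Deque<ChoicePoint> choicePoints = new ArrayDeque<>();
        syntaxStack.push(root);
        int pos = 0;
        for (int steps = 0; ; steps++) {
            if (steps > stepLimit) return false;        // guard against pathological syntax
            if (syntaxStack.isEmpty() && pos == tokens.size()) return true;
            if (!syntaxStack.isEmpty()) {
                Node n = syntaxStack.pop();
                if (n instanceof Seq s) {
                    // push children so that the first child is matched first
                    for (int i = s.children().size() - 1; i >= 0; i--) {
                        syntaxStack.push(s.children().get(i));
                    }
                    continue;
                }
                if (n instanceof Alt a) {
                    // remember the untried alternatives, then try the first one
                    choicePoints.push(new ChoicePoint(new ArrayDeque<>(syntaxStack), pos, 1, a));
                    syntaxStack.push(a.children().get(0));
                    continue;
                }
                if (n instanceof Sym t && pos < tokens.size() && tokens.get(pos).equals(t.text())) {
                    pos++;                              // token matched, advance
                    continue;
                }
            }
            // Mismatch (or leftover tokens): backtrack to the last choicepoint
            // that still has an untried alternative.
            boolean resumed = false;
            while (!choicePoints.isEmpty() && !resumed) {
                ChoicePoint cp = choicePoints.pop();
                if (cp.nextAlt() < cp.alt().children().size()) {
                    syntaxStack = new ArrayDeque<>(cp.stack());
                    pos = cp.pos();
                    choicePoints.push(new ChoicePoint(cp.stack(), cp.pos(), cp.nextAlt() + 1, cp.alt()));
                    syntaxStack.push(cp.alt().children().get(cp.nextAlt()));
                    resumed = true;
                }
            }
            if (!resumed) return false;                 // no choicepoints left: the parse fails
        }
    }
}
```

Note how the step limit makes even a deliberately ambiguous grammar terminate: the first alternative that yields a complete parse wins, and exhausting the limit simply fails the parse.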

Syntax and XML syntax specifications

As the parent page describes, the command syntax "picture" has two distinct parts. A command class registers Argument objects with the infrastructure to specify its formal command parameters. The concrete syntax for the command line is represented in memory by Syntax objects.

This page documents the syntactic constructs provided by the Syntax objects, and the XML syntax that provides the normal way of specifying a syntax.

You will notice that there can be a number of ways to build a command syntax from the constructs provided. This redundancy is intentional.

The Syntax base class

The Syntax class is the abstract base class for all classes that represent high-level syntactic constructs in the "new" syntax mechanisms. A Syntax object has two (optional) attributes that are relevant to the process of specifying syntax:

  • The "label" attribute gives a name for the syntax node that will be used when the node is formatted; e.g. for "help" messages.
  • The "description" attribute gives a basic description for the syntax node.

These attributes are represented in an XML syntax element using optional XML attributes named "label" and "description" respectively.


ArgumentSyntax

An ArgumentSyntax captures one value for an Argument with a given argument label. Specifically, an ArgumentSyntax instance will cause the parser to consume one token, and to attempt to bind it to the Argument with the specified argument label in the current ArgumentBundle.

Note that many Arguments are very non-selective in the tokens that they will match. For example, while an IntegerArgument will accept "123" as valid, so will a FileArgument and many other Argument classes. It is therefore important to take into account the parser's handling of ambiguity when designing command syntaxes; see below.

Here are some ArgumentSyntax instances, as specified in XML:

    <argument argLabel="foo"/>
    <argument label="foo" description="this controls the command's fooing" argLabel="foo"/>

EmptySyntax

An EmptySyntax matches absolutely nothing. It is typically used when a command requires no arguments.

    <empty description="dir with no arguments lists the current directory"/>


OptionSyntax

An OptionSyntax also captures a value for an Argument, but it requires the value token to be preceded by a token that gives an option "name". The OptionSyntax class supports both short option names (e.g. "-f filename") and long option names (e.g. "--file filename"), depending on the constructor parameters.

    <option argLabel="filename" shortName="f"/>
    <option argLabel="filename" longName="file"/>
    <option argLabel="filename" shortName="f" longName="file"/>

If the Argument denoted by the "argLabel" is a FlagArgument, the OptionSyntax matches just an option name (short or long depending on the attributes).


SymbolSyntax

A SymbolSyntax matches a single token from the command line without capturing any Argument value.

    <symbol symbol="subcommand1"/>


VerbSyntax

A VerbSyntax matches a single token from the command line, setting an associated Argument's value to "true".

    <verb symbol="subcommand1" argLabel="someArg"/>


SequenceSyntax

A SequenceSyntax matches a list of child Syntaxes in the order specified.

    <sequence description="the input and output files">
        <argument argLabel="input"/>
        <argument argLabel="output"/>
    </sequence>


AlternativesSyntax

An AlternativesSyntax matches one of a list of alternative child Syntaxes. The child syntaxes are tried one at a time in the order specified until one is found that matches the tokens.

    <alternatives description="specify an input or output file">
        <option shortName="i" argLabel="input"/>
        <option shortName="o" argLabel="output"/>
    </alternatives>


RepeatSyntax

A RepeatSyntax matches a single child Syntax repeated a number of times. By default, any number of matches (including zero) will satisfy a RepeatSyntax. The number of required and allowed repetitions can be constrained using the "minCount" and "maxCount" attributes. The default behavior is to match lazily; i.e. to match as few instances of the child syntax as is possible. Setting the attribute eager="true" causes the repeat to match as many child instances as possible, within the constraints of the "minCount" and "maxCount" attributes.

    <repeat description="zero or more files">
        <argument argLabel="file"/>
    </repeat>

    <repeat minCount="1" description="one or more files">
        <argument argLabel="file"/>
    </repeat>

    <repeat maxCount="5" eager="true"
              description="as many files as possible, up to 5">
        <argument argLabel="file"/>
    </repeat>


OptionalSyntax

An OptionalSyntax optionally matches a sequence of child Syntaxes; i.e. it matches nothing or the sequence. The default behavior is to match lazily; i.e. to try the "nothing" case first. Setting the attribute eager="true" causes the "nothing" case to be tried second.

    <optional description="nothing, or an input file and an output file">
        <argument argLabel="input"/>
        <argument argLabel="output"/>
    </optional>

    <optional eager="true"
                 description="an input file and an output file, or nothing">
        <argument argLabel="input"/>
        <argument argLabel="output"/>
    </optional>


PowerSetSyntax

A PowerSetSyntax takes a list of child Syntaxes and matches any number of each of them in any order or any interleaving. The default behavior is to match lazily; i.e. to match as few instances of the child syntax as is possible. Setting the attribute eager="true" causes the powerset to match as many child instances as possible.

    <powerSet description="any number of inputs and outputs">
        <option argLabel="input" shortName="i"/>
        <option argLabel="output" shortName="o"/>
    </powerSet>


OptionSetSyntax

An OptionSetSyntax is like a PowerSetSyntax with the restriction that the child syntaxes must all be OptionSyntax instances. What makes OptionSetSyntax different is that it allows options for FlagArguments to be combined in the classic Unix idiom; i.e. "-a -b" can be written as "-ab".

    <optionSet description="flags and value options">
        <option argLabel="flagOne" shortName="1"/>
        <option argLabel="flagTwo" shortName="2"/>
        <option argLabel="argThree" shortName="3"/>
    </optionSet>

Assuming that the "flagOne" and "flagTwo" correspond to FlagArguments, and "argThree" corresponds to (say) a FileArgument, the above syntax will match any of the following: "-1 -2 -3 three", "-12 -3 three", "-1 -3 three -1", "-3 three" or even an empty argument list.

The <syntax ... > element

The outermost element of an XML Syntax specification is the <syntax> element. This element has a mandatory "alias" attribute which associates the syntax with an alias that is in force for the shell. The actual syntax is given by the <syntax> element's zero or more child elements. These must be XML elements representing Syntax sub-class instances, as described above. Conceptually, each of the child elements represents an alternative syntax for the command denoted by the alias.

Here are some examples of complete syntaxes:

    <syntax alias="cpuid">
        <empty description="output the computer's id"/>
    </syntax>

    <syntax alias="dir">
        <empty description="list the current directory"/>
        <argument argLabel="directory" description="list the given directory"/>
    </syntax>

Ambiguous Syntax specifications

If you have ever implemented a language grammar using a parser generator (like Yacc, Bison, ANTLR and so on), you will recall how the parser generator could be very picky about your input grammar. For example, these tools will often complain about "shift-reduce" or "reduce-reduce" conflicts. This is a parser generator's way of saying that the grammar appears (to it) to be ambiguous.

The new-style command syntax parser takes a different approach. Basically, it does not care if a command syntax supports multiple interpretations of a command line. Instead, it uses a simple runtime strategy to resolve ambiguity: the first complete parse "wins".

Since the syntax mechanisms don't detect ambiguity, it is up to the syntax designer to be aware of the issue, and take it into account when designing the syntax. Here is an example:

    <alternatives description="a number or a file">
        <argument argLabel="number"/>
        <argument argLabel="file"/>
    </alternatives>

Assuming that "number" refers to an IntegerArgument, and "file" refers to a FileArgument, the syntax above is actually ambiguous. For example, a parser could in theory bind "123" to the IntegerArgument or the FileArgument. In practice, the new-style command argument parser will pick the first alternative that gives a complete parse, and bind "123" to the IntegerArgument. If you (the syntax designer) don't want this (e.g. because you want the command to work for all legal filenames), you will need to use OptionSyntax or TokenSyntax or something else to allow the end user to force a particular interpretation.
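For instance, here is a sketch of one way to force the interpretation, reusing the hypothetical "number" and "file" labels from the example above and the attributes described in the OptionSyntax section:

```xml
    <alternatives description="a number or a file, selected explicitly">
        <option argLabel="number" shortName="n"/>
        <option argLabel="file" shortName="f"/>
    </alternatives>
```

With this syntax, "-n 123" unambiguously binds "123" to the IntegerArgument, and "-f 123" binds it to the FileArgument.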

SyntaxSpecLoader and friends

More about the Syntax base class.

If you are planning on defining new sub-classes of Syntax, the key behavioral methods that must be implemented are as follows:

  • The "prepare" method is responsible for translating the syntax node into the MuSyntax graph that will be used by the parser. This will typically be produced by preparing any child syntaxes, and assembling them using the appropriate MuSyntax constructors. If the syntax entails recursion at the MuSyntax level, this will initially be expressed using MuBackReferences. The recursion points will then be transformed into graph cycles by calling "MuSyntax.resolveBackReferences()".
    Another technique that can be used is to introduce "synthetic" Argument nodes with special semantics. For example, the OptionSetSyntax uses a special Argument class to deal with combined short options; e.g. where "-a -b" is expressed as "-ab".

  • The "format" method renders the Syntax in a form that is suitable for "usage" messages.
  • The "toXML" method creates a "nanoxml.XMLElement" that expresses this Syntax node in XML form. It is used by the "SyntaxCommand" class when it dumps a syntax specification as text. It is important that "toXML" produces XML that is compatible with the SyntaxSpecLoader class; see below.

Using Command Line Completion (old syntax mechanism)

Note: this page describes the old syntax mechanism which is currently being phased out. Please refer to the parent menu for the pages on the new syntax mechanism.

The JNode Command Line Completion is one of the central aspects of the shell. JNode makes use of a sophisticated object model to declare command line arguments. This also provides for a standard way to extract a help-document that can be viewed by the user in different ways. Additionally, the very same object model can be used to access the arguments in a convenient manner, instead of doing the 133735th command line parsing implementation in computer history.

The following terms play an important role in this architecture:

  • Parameter
    A key that can be passed to a program. Typically, these are command line switches like "-h" or "--help", or indicators for the type of the assigned argument.
  • Argument
    A value that can be passed to a program. This can be filenames, free texts, integers or whatever type of arguments the program needs.
  • Syntax
    A program can define multiple Syntaxes, which provide for structurally different tasks it can handle. A Syntax is defined as a collection of mandatory and optional Parameters.

A sample command

The command used in this document is a ZIP-like command. I will call it sip. It provides for a variety of different parameter types and syntaxes.

The sip command, in this example, will have the following syntaxes:

sip -c [-password <password>] [-r] <sipfile> [<file> ...]

sip -x [-password <password>] <sipfile> [<file> ...]

Named Parameters:

  • -c: compress directory contents to a sipfile
  • -x: extract a sipfile
  • -r: recurse through subdirectories


Arguments:

  • password: the password for the sipfile
  • sipfile: the sipfile to perform the operation on
  • file: if given, only includes the files in this list

Declaring Arguments and Parameters

Let's set some preconditions, which will be of importance in the following chapters.

  • we will put the command into the package
  • the actual java implementation of the packager are found in

Therefore, the first lines of our Command class look like this:






public class SipCommand implements Command {

After importing the necessary packages, let's dive into the declaration of the Arguments. This is almost necessarily the first step when you want to reuse arguments. Good practice is to always follow this pattern, so you don't have to completely rework the declaration later. In short, we will work through the above definition from the bottom up.

You will note that all Arguments, Parameters and Syntaxes will be declared as static. This is needed because of the inner workings of the Command Line Completion, which has to have access to a static HELP_INFO field providing all the necessary information.

static StringArgument ARG_PASSWORD = new StringArgument("password", "the password for the sipfile");

static FileArgument ARG_SIPFILE = new FileArgument("sipfile", "the sipfile to perform the operation on");

static FileArgument ARG_FILE = new FileArgument("file", "if given, only includes the files in this list", Argument.MULTI);

Now we can declare the Parameters, beginning with the ones taking no Arguments.

Note: all Parameters are optional by default!

// Those two are mandatory, as we will define the two distinct syntaxes given above

static Parameter PARAM_COMPRESS = new Parameter(

"c", "compress directory contents to a sipfile", Parameter.MANDATORY);

static Parameter PARAM_EXTRACT = new Parameter(

"x", "extract a sipfile", Parameter.MANDATORY);

static Parameter PARAM_RECURSE = new Parameter(

"r", "recurse through subdirectories");

static Parameter PARAM_PASSWORD = new Parameter(

"password", "use a password to en-/decrypt the file", ARG_PASSWORD);

// here come our two anonymous Parameters used to pass the files

static Parameter PARAM_SIPFILE = new Parameter(


static Parameter PARAM_FILE = new Parameter(



There is something special about the second Syntax, the extract one. The command line completion for this one will fail, as it will try to suggest files that are in the current directory, not in the sipfile we want to extract from. We will need a special type of Argument to provide a convenient completion, along with an extra Parameter which uses it.

Test frameworks

Whenever you add some new functionality to JNode, please consider implementing some test code to exercise it.

Your options include:

  • implementing JUnit tests for exercising self-contained JNode-specific library classes,
  • implementing Mauve tests for JNode implementations of standard library classes,
  • implementing black-box command tests using the org.jnode.test.harness.* framework, or
  • implementing ad-hoc test classes.

We have a long term goal to be able to run all tests automatically on the new test server. New tests should be written with this in mind.

Black-box command tests with TestHarness


This page gives some guidelines for specifying "black-box tests" to be run using the TestHarness class; see "Running black-box tests".

A typical black-box test runs a JNode command or script with specified inputs, and tests that its outputs match the outputs set down by the test specification. Example test specifications may be found in the "Shell" and "CLI" projects in the respective "src/test" trees; look for files named "*-tests.xml".

Syntax for test specifications

Let's start with a simple example. This test runs the "ExprCommand" command class with the arguments "1 + 1", and checks that it writes "2" to standard output and sets the return code to "0".

<testSpec title="expr 1 + 1" command=""
             runMode="AS_ALIAS" rc="0">
  <arg>1</arg> <arg>+</arg> <arg>1</arg>
  <output>2
</output>
</testSpec>


  1. The odd indentation of the closing "output" tag is not a typo. This element specifies that the output should consist of a "2" followed by a newline. If the closing tag was indented, the test would "expect" a couple of extra space characters after the newline, and this would cause a spurious test failure.
  2. Any literal '<', '>' and '&' characters in the XML file must be suitably "escaped"; e.g. using XML character entities;

A "testSpec" element and its nested elements specify a single test. The elements and attributes are as follows:

"title" (mandatory attribute)

gives a title for the test to identify it in test reports.

"command" (mandatory attribute)

gives the command name or class name to be used for the command / script.

"runMode" (optional attribute)

says whether the test involves running a command alias or class ("AS_ALIAS") in the same way as the JNode CommandShell would do, running a script ("AS_SCRIPT"), or executing a class via its 'main' entry point ("AS_CLASS"). The default for "runMode" is "AS_ALIAS".

"rc" (optional attribute)

gives the expected return code for the command. The default value is "0". Note that the return code cannot be checked when the "runMode" is "AS_CLASS".

"trapException" (optional attribute)

if present, this is the fully qualified classname of an exception. If the test throws this exception or a subtype, the exception will be trapped, and the harness will proceed to check the test's post-conditions.

"arg" (optional repeated elements)

these elements give the "command line" arguments for the command. If they are omitted, no arguments are passed.

"script" (conditional element)

if "runMode" is "AS_SCRIPT", this element should contain the text of the script to be executed. The first line should probably be "#!<interpreter-name>".

"input" (optional element)

this gives the character sequence that will be available as the input stream for the command. If this element is absent, the command will be given an empty input stream.

"output" (optional element)

this gives the expected standard output contents for the command. If this element is absent, nothing is expected to be written to standard output.

"error" (optional element)

this gives the expected standard error contents for the command. If this element is absent, nothing is expected to be written to standard error.

"file" (optional repeating element)

this gives an input or output file for the test, as described below.

Syntax for "file" elements

A "file" element specifies an input or output file for a test. The attributes and content
are as follows:

"name" (mandatory attribute)

gives the file name. This must be relative, and will be resolved relative to the test's temporary directory.

"input" (optional attribute)

if "true", the element's contents will be written to a file, then made available for the test or harness to read. If "false", the element's contents will be checked against the contents of the file after the test has run.

"directory" (optional attribute)

if "true", the file denotes a directory to be created or checked. In this case, the "file" element should have no content.
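To illustrate how these attributes combine, here is a hypothetical set of "file" elements for a test of a copy-like command (the file names and contents are invented for illustration):

```xml
  <file name="input.txt" input="true">some test data</file>
  <file name="output.txt">some test data</file>
  <file name="workdir" directory="true"/>
```

The first element creates a file before the test runs; the second checks that the test wrote a file with the given content; the third denotes a directory to be created or checked.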

Script expansion

Before a script is executed, it is written to a temporary directory. Any @ sequence in the script will be replaced with the name of the directory where input files are created and where output files are expected to appear.

Syntax for test sets

While the test harness can handle XML files containing a single <testSpec> element, it is more convenient to assemble multiple tests into a test set. Here is a simple example:

<testSet title="expr tests">
  <include setName="../somewhere/more-tests.xml"/>
  <testSpec title="expr 1 + 1" ...>
  <testSpec title="expr 2 * 2" ...>
</testSet>

The "include" element declares that the tests in another test set should be run as part of this one. If the "setName" is relative, it will be resolved relative to this testSet's parent directory. The "testSpec" elements specify tests that are part of this test set.

Plugin handling

As a general rule, JNode command classes and aliases are defined in plugins. When the test harness is run, it needs to know which plugins need to be loaded, or the equivalent if we are running on the development platform. This is done using "plugin" elements; for example:

  <plugin id=""/>
  <plugin id=""/>

These elements may be child elements of either "testSpec" or "testSet" elements. A given plugin may be specified in more than one place, though if a plugin is specified differently in different places, the results are undefined. If a "plugin" element is in a "testSpec", the plugin will be loaded before the test is run. If a "plugin" element is in a "testSet", the plugin will be loaded before any test in the set, as well as before any tests that are "included".

The "plugin" element has the following attributes:

"id" (mandatory attribute)

gives the identifier of the plugin to be loaded.

"version" (optional attribute)

gives the version string for the plugin to be loaded. This defaults to JNode's default plugin version string.

"class" (optional attribute)

gives the fully qualified class name for a "pseudo-plugin" class; see below.

When the test harness is run on JNode, a "plugin" element causes the relevant Plugin to be loaded via the JNode plugin manager, using the supplied plugin id and the supplied (or default) version string.

When the test harness is run outside of JNode, the Emu is used to provide a minimal set of services. Currently, this does not include a plugin manager, so JNode plugins cannot be loaded in the normal way. Instead, a "plugin" element triggers the following:

  1. The plugin descriptor file is located and read to extract any aliases and command syntaxes. This information is added to Emu's alias and syntax managers.
  2. If the "plugin" element includes a "class" attribute, the corresponding class is loaded and the default constructor is called. This provides a "hook" for doing some initialization that would normally be done by the real Plugin. For example, the "plugin" elements in the bjorne tests use the "BjornePseudoPlugin" class to register the interpreter with the shell services. (This is normally done by "BjornePlugin".)

Virtual Machine

This part contains the technical documentation of the JNode virtual machine.


Arrays

Arrays are allocated just like normal java objects. The number of elements of the array is stored as the first (int) instance variable of the object. The actual data is located just after this length field.

Bytecodes that work on arrays perform index checking; e.g. on the X86 this is implemented using the bound instruction.
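The address arithmetic this layout implies can be sketched as follows. This is illustrative only: the 4-byte length field size is an assumption based on the description above, not JNode's actual object-layout constants, and the bounds check stands in for what the array bytecodes do.

```java
// Illustrative only: hypothetical offsets, not JNode's real object layout.
public class ArrayLayout {
    // The array length is stored as the first (int) field; assume 4 bytes.
    static final int LENGTH_FIELD_SIZE = 4;

    // Returns the byte offset of element 'index' relative to the start of
    // the array's data area, after the bounds check the bytecodes imply
    // (cf. the x86 'bound' instruction).
    static int elementOffset(int length, int index, int elemSize) {
        if (index < 0 || index >= length) {
            throw new ArrayIndexOutOfBoundsException(index);
        }
        return LENGTH_FIELD_SIZE + index * elemSize;
    }
}
```

For example, element 3 of an int[] (4-byte elements) would start 16 bytes into the data area under these assumptions.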

Classes & Objects

Each class is represented by an internal structure of class information, method information and field information. All this information is stored in normal java objects.

Every object is located somewhere in the heap. It consists of an object header and space for instance variables.


At the start of each method invocation a frame for that method is created on the stack. This frame contains references to the calling frame and contains a magic number that is used to differentiate between compiled code invocations and interpreted invocations.
When an exception is thrown, the exception table of the current method is inspected. When an exception handler is found, the calculation stack is cleaned and code executing continues at the handler address.
When no suitable exception handler is found in the current method, the stackframe of the current method is destroyed and the process continues at the calling method.
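The dispatch described above can be modelled roughly as follows. The Frame and Handler classes here are illustrative simplifications, not JNode's internal structures.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Toy model of exception dispatch: search the current method's
// exception table; if no handler matches, destroy the frame and
// continue the search in the calling method.
public class Unwind {
    static class Handler {
        final Class<?> type; final int address;
        Handler(Class<?> type, int address) { this.type = type; this.address = address; }
    }
    static class Frame {
        final List<Handler> table;
        Frame(List<Handler> table) { this.table = table; }
    }

    static int dispatch(Deque<Frame> stack, Throwable t) {
        while (!stack.isEmpty()) {
            for (Handler h : stack.peek().table) {
                if (h.type.isInstance(t)) return h.address; // handler found
            }
            stack.pop(); // no handler: destroy this frame, try the caller
        }
        return -1; // uncaught exception
    }

    public static void main(String[] args) {
        Deque<Frame> stack = new ArrayDeque<>();
        stack.push(new Frame(Arrays.asList(new Handler(Exception.class, 100)))); // caller
        stack.push(new Frame(Arrays.<Handler>asList())); // current method, no handlers
        System.out.println(dispatch(stack, new RuntimeException())); // prints 100
    }
}
```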

The stack trace of an exception is created from the frames of each method invocation. A class called VmStackFrame has exactly the same layout as a frame on the stack and is used to enumerate all method invocations.

Garbage collection

JNode uses a simple mark&sweep collector. You can read about the differences, the terms used and some general implementation details at wikipedia. In these terms JNode uses a non-moving, stop-the-world, conservative garbage collector.

About the JNode memory manager you should know the following. The class org.jnode.vm.memmgr.def.VmBootHeap manages all objects that were allocated during bootimage creation. VmDefaultHeap contains objects allocated during runtime. Each object on the heap has a header that contains some extra information about the object: its type, a reference to a monitor (if present) and the object's color (see wikipedia). JNode objects can have one of 4 different colors plus an extra finalization bit. The values are defined in org.jnode.vm.classmgr.ObjectFlags.

At the beginning of a gc cycle all objects are either WHITE (i.e. not visited/newly allocated) or YELLOW (this object is awaiting finalization).

The main entry point for the gc is org.jnode.vm.memmgr.def.GCManager#gc(), which triggers the gc run. As you can see, one of the first things in gc() is a call to "helper.stopThreadsAtSafePoint();", which stops all threads except the garbage collector. The collection is then divided into 3 phases: markHeap, sweep and cleanup. The two optional verify calls at the beginning and end are used for debugging, to check that the heap is consistent.

The mark phase has to mark all reachable objects. For that JNode uses org.jnode.vm.memmgr.def.GCStack (the so-called mark stack) to traverse the reference graph. At the beginning all roots get marked, where the roots are all references in static variables plus all references on any thread's stack. Using the visitor pattern, the method org.jnode.vm.memmgr.def.GCMarkVisitor#visit gets called for each object. If the object is BLACK (it was visited before and all its children got visited. Mind: this does not mean that the children's children got visited!) we simply return and continue with the next reference in the 'root set'. If the object is GREY (it was visited before, but not all of its children) or in the 'root set', the object gets pushed on the mark stack and mark() gets called.
Let's make another step down and examine the mark() method. It first pops an object off the mark stack and tries to get the object's type. For all children (either references in object arrays or reference fields of objects) processChild gets called, and each WHITE (not yet visited) object gets modified to be GREY. After that the object gets pushed on the mark stack. It is important to understand at this point that the mark stack might overflow! If that happens, the mark stack simply discards the object to push and remembers the overflow. Back in the mark() method we know one thing for sure: all children of the current object are marked GREY (or even BLACK from a previous mark()), and this is true even if the mark stack had an overflow. After examining the object's Monitor and TIB it can be turned BLACK.
Back in GCManager#markHeap() we are either finished with marking or the mark stack had an overflow. If it had an overflow, we have to repeat the mark phase. Since many objects are already BLACK it is less likely that the stack will overflow again, but there is one important point to consider: all roots got marked BLACK, but as said above not all children's children need to be BLACK; they might be GREY or even WHITE. That is why we also have to walk all heaps in the second iteration.
At the end of the mark phase all objects are either BLACK (reachable) or WHITE (not reachable), so the WHITE ones can be removed.
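The color transitions described above can be condensed into a small sketch. This is a toy model of tri-color marking with an explicit mark stack, assuming a simplified object graph; it is not JNode's actual GCStack/GCMarkVisitor code and it omits overflow handling, monitors and TIBs.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.Deque;
import java.util.List;

// Minimal tri-color mark phase: roots and reachable objects end up
// BLACK, unreachable objects stay WHITE for the sweep phase.
public class MarkSketch {
    enum Color { WHITE, GREY, BLACK }

    static class Node {
        Color color = Color.WHITE;
        List<Node> children = new ArrayList<>();
    }

    static void mark(Collection<Node> roots) {
        Deque<Node> stack = new ArrayDeque<>(); // the "mark stack"
        for (Node root : roots) {
            root.color = Color.GREY;
            stack.push(root);
        }
        while (!stack.isEmpty()) {
            Node n = stack.pop();
            for (Node child : n.children) {
                if (child.color == Color.WHITE) { // not visited yet
                    child.color = Color.GREY;
                    stack.push(child);
                }
            }
            n.color = Color.BLACK; // all children are now at least GREY
        }
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node(), garbage = new Node();
        a.children.add(b); // a -> b reachable, 'garbage' is not
        mark(Arrays.asList(a));
        System.out.println(a.color + " " + b.color + " " + garbage.color);
        // prints BLACK BLACK WHITE
    }
}
```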

The sweep again walks the heap (this time without the 'root set', as it does not contain garbage by definition) and again visits each object via org.jnode.vm.memmgr.def.GCSweepVisitor#visit. As WHITE objects are no longer reachable, it first tests whether the object was previously finalized. If it was, it will be freed; if not and the object has a finalizer, it will be marked YELLOW (awaiting finalization); otherwise it will be freed too. If the object is neither WHITE nor YELLOW, it will be marked WHITE for the next gc cycle.

The cleanup phase at the end sets all objects in the bootHeap to WHITE (as they are not swept above and might be BLACK) and afterwards calls defragment() for every heap.

Some other thoughts regarding the JNode implementation include:

It should also be noted that JNode does not know about the stack's details. When the mark phase visits all values of a thread's stack, it can never tell whether a value is a reference or a simple int, float, etc. This is why the JNode garbage collector is called conservative: every value on the stack might be a reference pointing to a valid object, so even if it is really a float, we have to visit the object and run a mark() cycle, as we cannot know for sure. On the one hand this means that we might mark memory as reachable that in reality is garbage; on the other hand it means that we might point to YELLOW objects from the stack. As YELLOW objects are awaiting finalization, they are garbage (except in the case where the finalizer reactivates the object), so they cannot legitimately be in the 'root set' (except in the case where a random value on the stack is incorrectly taken for a reference). This is also the reason for the current "workaround" in GCMarkVisitor#visit(), where YELLOW objects in the 'root set' trigger error messages instead of killing JNode.

There is some preliminary code for WriteBarrier support in JNode. This is a first step towards making the gc concurrent. If the WriteBarrier is enabled at build time, the JNode JIT includes some special code in the compiled native code. For each bytecode that stores a reference into a field or local, the writebarrier gets called and the object gets marked GREY, so the gc knows that the heap changed during marking. It is very tricky to do all this with proper synchronization, and the current code still has bugs, which is why it is not activated yet.

Java Security

This chapter covers the Java security implemented in JNode. This involves the security manager, access controller and privileged actions.
It does not involve user management.

The Java security in JNode is an implementation of the standard Java security API. This means that permissions are checked against an AccessControlContext which contains ProtectionDomains. See the Security Architecture documentation for more information.

In JNode the security manager is always on. This ensures that permissions are always checked.
The security manager (or rather the AccessController) executes the security policy implemented by JNodePolicy. This policy is an implementation of the standard Policy class.
This policy contains some static permissions (mainly for access to certain system properties) and handles dynamic (plugin) permissions.

The dynamic permissions are plugin based. Every plugin may request certain permissions. The Policy implementation decides if these permissions are granted to the plugin.

To request permissions for a plugin, add an extension to the plugin-descriptor connected to the "" extension-point.
This extension has the following structure:

<permission class="..." name="..." actions="..."/>

  • class: the full classname of the permission, e.g. "java.util.PropertyPermission"
  • name: the name of the permission; this attribute is permission-class dependent, e.g. ""
  • actions: the actions of the permission; this attribute is permission-class dependent, e.g. "read"

Multiple permissions can be added to a single extension.

If you need specific permissions, make sure to run that code in a PrivilegedAction. Besides your own actions, the following standard PrivilegedActions are available:

  • Wraps System.getProperty
  • Wraps Integer.getInteger
  • Wraps Boolean.getBoolean
  • Wraps Policy.getPolicy
  • Wraps Method.invoke
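For illustration, running permission-requiring code inside a PrivilegedAction with the standard java.security API looks like this; reading "user.dir" is just an example property.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;

// Wrap the permission-requiring call in a PrivilegedAction so the
// permission check stops at this code's own ProtectionDomain.
public class PrivilegedExample {
    public static void main(String[] args) {
        String dir = AccessController.doPrivileged(
            new PrivilegedAction<String>() {
                public String run() {
                    return System.getProperty("user.dir");
                }
            });
        System.out.println(dir != null); // the property is always set
    }
}
```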


Multithreading in JNode involves the scheduling of multiple java.lang.Thread instances between 1 or more physical processors. (In reality, multiprocessor support is not yet stable). The current implementation uses the yieldpoint scheduling model as described below.

Yieldpoint scheduling

Yieldpoint scheduling means that every thread checks at certain points (called "yieldpoints") in the native code whether it should let other threads run. The native code compiler adds yieldpoints into the native code stream at the beginning and end of a method, at backward jumps, and at method invocations. The yieldpoint code checks whether the "yield" flag has been set for the current thread, and if it has, it issues a yield (software) interrupt. The kernel takes over and schedules a new thread.

The "yield" flag can be set by a timer interrupt, or by the (kernel) software itself, e.g. to perform an explicit yield or in case of locking synchronization methods.

The scheduler invoked by the (native code) kernel is implemented in the VmProcessor class. This class (one instance for every processor) contains a list of threads ready to run, a list of sleeping threads and a current thread. On a reschedule, the current thread is appended to the end of the ready to run thread-list. Then the sleep list is inspected first for threads that should wake-up. These threads are added to the ready to run thread-list. After that the first thread in the ready to run thread-list is removed and used as current thread. The reschedule method returns and the (native code) kernel does the actual thread switching.
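The reschedule step just described can be sketched as follows. This is a toy model of the logic, assuming illustrative field and method names, not VmProcessor's real API.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.Iterator;
import java.util.List;

// Toy reschedule: requeue the current thread, move woken sleepers to
// the ready list, then take the head of the ready list as current.
public class SchedulerSketch {
    static class T {
        final String name; final long wakeUpAt;
        T(String name, long wakeUpAt) { this.name = name; this.wakeUpAt = wakeUpAt; }
    }

    Deque<T> ready = new ArrayDeque<>();
    List<T> sleeping = new ArrayList<>();
    T current;

    T reschedule(long now) {
        if (current != null) ready.addLast(current); // requeue current thread
        for (Iterator<T> it = sleeping.iterator(); it.hasNext(); ) {
            T t = it.next();
            if (t.wakeUpAt <= now) { ready.addLast(t); it.remove(); } // woke up
        }
        current = ready.pollFirst(); // first ready thread becomes current
        return current;
    }

    public static void main(String[] args) {
        SchedulerSketch s = new SchedulerSketch();
        s.current = new T("A", 0);
        s.sleeping.add(new T("B", 5));
        // B wakes up, but A was requeued first, so A runs again.
        System.out.println(s.reschedule(10).name); // prints A
    }
}
```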

The scheduler itself runs in the context of the kernel and should not be interrupted. A special flag is set to prevent yieldpoints in the scheduler methods themselves from triggering reentrant yieldpoint interrupts. The flag is only cleared when the reschedule is complete.

Why use yieldpoint scheduling?

JNode uses yield point scheduling to simplify the implementation of the garbage collector and to reduce the space needed to hold GC descriptors.

When the JNode garbage collector runs, it needs to find all "live" object references so that it can work out which objects are not garbage. A bit later, it needs to update any references for objects that have been moved in memory. Most object references live either in other objects in the heap, or in local variables and parameters held on one of the thread stacks. However, when a thread is interrupted, the contents of the hardware registers are saved in a "register save" area, and this may include object references.

The garbage collector is able to find these references because the native compiler creates descriptors giving the offsets of references. For each class, there is a descriptor giving the offsets of its reference attributes and statics in their respective frames. For each method or constructor, another descriptor gives the corresponding stack frame layout. But we still have to deal with the saved registers.

If we allowed a JNode thread to be interrupted at any point, the native compiler would need to create descriptors for all possible saved register sets. In theory, we might need a different descriptor for every bytecode. By using yieldpoints, we can guarantee that "yields" only occur at fixed places, thereby reducing the number of descriptors that need to be kept.

However, the obvious downside of yieldpoints is the performance penalty of repeatedly testing the "yield" flag, especially when executing a tight loop.

Thread priorities

Threads can have different priorities, ranging from Thread.MIN_PRIORITY to Thread.MAX_PRIORITY. In JNode these priorities are implemented via the ready to run thread-list. This list is (almost) always sorted on priority, which means that the threads with the highest priority come first.

There is one exception to this rule, which is the case of busy-waiting in the synchronization system. Claiming access to a monitor (internals) involves a busy-waiting loop with an explicit yield. This yield ignores the thread priority to avoid starvation of lower-priority threads, which would lead to an endless waiting time for the high-priority thread.

Classes involved

The following classes are involved in the scheduling system. All of these classes are in the org.jnode.vm package.

  • VmProcessor contains the per-processor scheduling state: the ready to run thread-list, the sleep list and the current thread. There is one instance for every processor.
  • VmThread contains the internal (JNode specific) data for a single thread. This class is extended for each specific platform.

Native code compilation

All methods are compiled before being executed. At first, the method is "compiled" to a stub that calls the most basic compiler and then invokes the compiled code.

Better compilers are invoked when the VM detects that a method is invoked often. These compilers perform more optimizations.

Intel X86 compilers

JNode currently has two different native code compilers for the Intel X86 platform plus one stub compiler.

STUB is a stub compiler that generates, for each method, a stub that invokes the L1 compiler for that method and then invokes the generated code itself. This compiler ensures that methods are compiled before being executed, but avoids compilation time when a method is never invoked at all.

L1A is a basic compiler that translates java bytecode directly to decent X86 instructions. This compiler uses register allocation and a virtual stack to eliminate many of the stack operations. The focus of this compiler is on fast compilation and reasonably fast generated code.

L2 is an optimizing compiler that focuses on generating very fast code, not on compilation speed. This compiler is currently under construction.

All X86 compilers can be found below the org.jnode.vm.x86.compiler package.

IR representation

Optimizing compilers use an intermediate representation instead of java bytecodes. The intermediate representation (IR) is an abstract representation of machine operations which are eventually mapped to machine instructions for a particular processor. Many optimizations can be performed without concern for machine details, so the IR is a good start. Additional machine dependent optimizations can be performed at a later stage. In general, the most important optimizations are machine independent, whereas machine dependent optimizations will typically yield lesser gains in performance.

The IR is typically represented as set of multiple operand operations, usually called triples or quads in the literature. The L2 compiler defines an abstract class to describe an abstract operation. Many concrete implementations are defined, such as BinaryQuad, which represents binary operations, such as a = x + y. Note that the left hand side (lhs) of the operation is also part of the quad.
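As an illustration, a binary quad for a = x + y could be modelled like this. The class and field names are illustrative stand-ins, not JNode's actual org.jnode.vm IR classes.

```java
// Minimal three-address "quad": a destination (lhs), an operator and
// two operands, mirroring the shape of a BinaryQuad.
public class QuadSketch {
    enum Op { ADD, SUB, MUL }

    static class BinaryQuad {
        final String lhs, op1, op2; final Op op;
        BinaryQuad(String lhs, String op1, Op op, String op2) {
            this.lhs = lhs; this.op1 = op1; this.op = op; this.op2 = op2;
        }
        @Override public String toString() {
            return lhs + " = " + op1 + " " + op + " " + op2;
        }
    }

    public static void main(String[] args) {
        // a = x + y becomes a single quad; note that the left hand
        // side is part of the quad itself.
        System.out.println(new BinaryQuad("a", "x", Op.ADD, "y"));
    }
}
```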

A set of Quads representing the bytecodes of a given method is prepared by

L2 Compiler Phases

The L2 compiler operates in four phases:

1. Generate intermediate representation (IR)
2. Perform second pass optimizations (pass2)
3. Register allocation
4. Generate native code

The first phase parses bytecodes and generates a set of Quads. This phase also performs simple optimizations, such as copy propagation and constant folding.

Pass2 simplifies operands and tries to eliminate dead code.

Register allocation is an attempt to assign live variable ranges to available machine registers. As register access is significantly faster than memory access, register allocation is an important optimization technique. In general, it is not always possible to assign all live variable ranges to machine registers. Variables that cannot be allocated to registers are said to be 'spilled' and must reside in memory.

Code is generated by iterating over the set of IR quads and producing machine instructions.

Object allocation

All new statements used to allocate new objects are forwarded to a HeapManager. This class allocates & initializes the object. The objects are allocated from one of several heaps. Each heap contains objects of various sizes. Allocation is currently as simple as finding the next free space that is large enough to fit all instance variables of the new object and claiming it.

An object is blanked on allocation, so all instance variables are initialized to their default (null) values. Finally the object header is initialized, and the object is returned.
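The allocation strategy described above amounts to a first-fit scan. The following toy model, with illustrative names and a byte-granular heap, is not JNode's HeapManager but shows the same idea: find the first free run that is large enough, claim it and blank it.

```java
// First-fit allocation over a toy heap: scan for the first free run
// of the requested size, mark it used and zero it (the "blanking").
public class FirstFitSketch {
    static final int HEAP_SIZE = 64;
    final byte[] heap = new byte[HEAP_SIZE];
    final boolean[] used = new boolean[HEAP_SIZE];

    int allocate(int size) {
        int run = 0;
        for (int i = 0; i < HEAP_SIZE; i++) {
            run = used[i] ? 0 : run + 1;
            if (run == size) {
                int start = i - size + 1;
                for (int j = start; j <= i; j++) {
                    used[j] = true;
                    heap[j] = 0; // blank: default (null/zero) field values
                }
                return start; // "address" of the new object
            }
        }
        return -1; // no free space large enough
    }

    public static void main(String[] args) {
        FirstFitSketch h = new FirstFitSketch();
        System.out.println(h.allocate(16)); // prints 0
        System.out.println(h.allocate(16)); // prints 16
    }
}
```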

To directly manipulate memory at a given address, a class called Unsafe is used. This class contains native methods to get/set the various java types.


Synchronization involves the implementation of synchronized methods and blocks and the wait, notify, notifyAll method of java.lang.Object.

Both items are implemented using the classes Monitor and MonitorManager.

Lightweight locks

JNode implements a lightweight locking mechanism for synchronized methods and blocks. For this purpose a lockword is added to the header of each object. Depending on the state of the object on which a thread wants to synchronize, a different route is taken.

This is in principle how the various states are handled.

  1. The object is not locked: the lockword is set to a merge of the id of this thread and a lockcount of '1'.
  2. The object is locked by this thread: the lockcount part of the lockword is incremented.
  3. The object is locked by another thread: an inflated lock is installed for the object and this thread is added to the waiting list of the inflated lock.

All manipulation of the lockword is performed using atomic instructions prefixed with multiprocessor LOCK flags.

When the lockcount part of the lockword is full, an inflated lock is also installed.

Once an object has an inflated lock installed, this inflated lock will always be used.
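The three lockword states can be sketched with a compare-and-set loop. This is a toy model assuming a particular bit layout (owner id in the high 32 bits, lock count in the low 32 bits); JNode's real header format and Monitor inflation are more involved.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy thin lock: the lockword merges the owner thread id and a lock
// count, updated only with atomic compare-and-set operations.
public class ThinLockSketch {
    // high 32 bits: owner thread id; low 32 bits: lock count (0 = free)
    private final AtomicLong lockword = new AtomicLong(0);

    boolean tryLock(int threadId) {
        long word = lockword.get();
        if (word == 0) { // state 1: unlocked, claim with count 1
            return lockword.compareAndSet(0, ((long) threadId << 32) | 1);
        }
        if ((word >>> 32) == threadId) { // state 2: we own it, recurse
            return lockword.compareAndSet(word, word + 1);
        }
        return false; // state 3: contended, would inflate to a Monitor
    }

    public static void main(String[] args) {
        ThinLockSketch lock = new ThinLockSketch();
        System.out.println(lock.tryLock(7)); // true: acquired
        System.out.println(lock.tryLock(7)); // true: recursive lock
        System.out.println(lock.tryLock(9)); // false: owned by thread 7
    }
}
```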

Wait, notify

Wait and notify(all) require that the current thread is the owner of the object on which wait/notify is invoked. The wait/notify implementation will install an inflated lock on the object if the object does not already have one installed.


The following reports are generated nightly reflecting the state of SVN trunk.
