services/drivers/plugins managers... aren't we forgetting the most important thing in all this?

I think that the discussion about service integrations in JNode and other sci-fi things is exactly what we need: innovation. We need to innovate in order to successfully bring a new OS into a market which is now full of Linux/Windows clones. My point when I started the discussion was that the computer should be the one giving options to the user, and not the other way around. All your posts are very interesting and bring us one step closer to achieving this, but I think we missed the most important thing in our discussion, and that is THE USER INTERACTION WITH THE SYSTEM.

A smart system to find/install/upgrade services is the kernel of such a system, but we must not forget that the hard part, the real innovation, comes when we implement the user interaction with such a system. The question is: how will the user interact with the services in such a way that he need not know what is locally installed? He should need to know only what he wants. Current operating systems are based on the assumption that the user knows the system: knows what is installed and where, knows how to solve the problems that he has. I think this approach is a mistake, and JNode should come with a new and innovative way of handling user interaction with its system. The user should not have to know the system; the system should know itself. The user should describe what he wants, and the system should find a way to do the work requested.

Here I see two big issues. First, how can a service describe to the OS what it can do, so that the system can keep a database of all the possible actions? Second, how will the user formulate his needs? If I (a big Star Trek fan) had to choose the solution for user interaction, I would say that the user should be able to describe his needs in plain English (or another natural human language).
OK, don't flame me... I know that this is just Star Trek, but I actually think that a mixture of natural language and OS assistance could do the trick. I think that all services should describe the objects they work with and the actions they support on those objects; the user would then describe his actions by naming the object type he wants to work with. Actually, the German language uses this order, and I must say, with success: in German you would say "document edit" and not "edit document", or "radio listen" and not "listen to the radio".

What if the user first typed the subject of his action into a box, and the OS then presented him with a list of actions that the system has registered? After choosing the action he wants, he would provide the parameters needed to complete it. For example: the user types "folder" into the first input box; the OS fills a combo box with the possible actions "create; delete; move; copy; upload; change attributes"; the user selects the action; and then another edit box appears, asking the user to provide the parameters needed to complete the action. Or: the user types "document" into the first edit box -> the OS fills the actions combo box with the registered actions "create; edit; read only; publish; spell-check" -> the user chooses the action -> the OS asks for the parameters needed to perform it. I think this kind of user interaction would be close to what users expect.

Choosing such a solution would mean that the GUI must not be specified by the service. The service should just specify the parameter types that it needs. For example, if the user types "email" into the action edit box, the OS should search all services that have "email" as their main subject, compile a list of possible actions, and present them to the user. If the user chooses the action "write new", the OS should determine the parameters needed for that action by asking the service registered for it.
The service should not ask for the parameters by presenting a GUI. The service should just specify the parameters needed and their types. Returning to the email example, the service for "email" -> "write new" will tell the OS that it needs one or more contacts for the recipient, one contact for the sender, an optional text for the subject, an optional text for the body, and an optional attachment. Having this list, the OS can then create the right GUI for the user. In our email example, the OS will find that the "email" service's "write new" action needs contacts, so it will search for another service that has "contact" as its main subject and use that service to build a GUI for the user to search for and select one or more contacts. After the user has selected the recipient using the "contact" service, he will continue filling in the parameters for the "write new" action, again assisted by the OS.

This is my dream: a GUI which is standard and is assembled based on the service's and the user's needs. Add a voice recognition system to this interaction model and you have the future of user interaction. One big advantage of a system that builds the GUI dynamically is the portability of the services: the same services could be installed on a mobile phone or a PC; only the GUI service would be different. Right now I have only a dream, and maybe the world is not ready for such a system, but I had to share it.
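A minimal sketch of what "the service specifies parameters, not a GUI" could look like. All names here (`ParameterSpec`, `ServiceAction`, `EmailWriteAction`) are hypothetical illustrations, not an existing JNode API; the parameter list mirrors the "email" -> "write new" example above.

```java
import java.util.List;

// Hypothetical descriptor: a service declares WHAT it needs, not HOW to ask for it.
// "type" names a subject that another service (e.g. "contact") can supply;
// "required"/"multiple" tell the OS which kind of widget to build.
record ParameterSpec(String name, String type, boolean required, boolean multiple) {}

interface ServiceAction {
    String subject();                    // e.g. "email"
    String action();                     // e.g. "write new"
    List<ParameterSpec> parameters();    // what the OS must collect from the user
}

// The "email" -> "write new" example from the post above.
class EmailWriteAction implements ServiceAction {
    public String subject() { return "email"; }
    public String action()  { return "write new"; }
    public List<ParameterSpec> parameters() {
        return List.of(
            new ParameterSpec("recipient",  "contact", true,  true),
            new ParameterSpec("sender",     "contact", true,  false),
            new ParameterSpec("subject",    "text",    false, false),
            new ParameterSpec("body",       "text",    false, false),
            new ParameterSpec("attachment", "file",    false, true));
    }
}

public class Demo {
    public static void main(String[] args) {
        ServiceAction a = new EmailWriteAction();
        // The OS would walk this list and build one widget per parameter,
        // delegating "contact" parameters to the registered "contact" service.
        for (ParameterSpec p : a.parameters())
            System.out.println(p.name() + ":" + p.type()
                    + (p.required() ? "" : " (optional)"));
    }
}
```

The point of the sketch is that nothing in it is visual: the same descriptor could drive a desktop GUI, a phone screen, or a voice front-end.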

User to UI

I believe wholeheartedly in the concepts stated above. Our plans to have plugins install dynamically and automatically require that we create an abstract UI that does not require the user to know what is installed. The suggestion above that we use a natural human language is a good foundation. This discussion has surfaced some problems with that idea as a whole.

One of the problems I have found with it is the existence of a local database of possible actions to invoke against an object. This database either would not reflect the development of new actions, or would have to synchronize with an external source that does.

This is my suggestion, only an amendment to the above idea. The local system could maintain a list of commands mapped to actions. The commands would be Strings of any form. The language of the Strings could be that of the user, or just symbols the user made up: '(>?'. The actions would take the form of plugin URIs which can be invoked through the plugin manager. The URIs specify what class to use and what args to supply (a String[] for main).
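The command/action map described above could be sketched as follows. The `CommandMap` class and the `plugin://` URI shape are made up for illustration; the real plugin-manager addressing scheme would have to be defined elsewhere.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical command/action map: the key is whatever string the user
// likes (any language, even made-up symbols); the value is a plugin URI
// naming the class to run and the args to supply.
public class CommandMap {
    private final Map<String, String> actions = new HashMap<>();

    // Binding a command makes that URI its default action.
    public void bind(String command, String pluginUri) {
        actions.put(command, pluginUri);
    }

    // A null result would mean: no local default, ask the network.
    public String lookup(String command) {
        return actions.get(command);
    }

    public static void main(String[] args) {
        CommandMap map = new CommandMap();
        map.bind("edit document", "plugin://org.jnode.example.editor/Editor");
        map.bind("'(>?",          "plugin://org.jnode.example.mail/Compose");
        System.out.println(map.lookup("edit document"));
    }
}
```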

This is then a list of the default actions for a set of commands.

It was suggested above that a text field be the start of user interaction. I will use that idea. The user would type into that field a command that he wants to run. The system both looks for a local match in the command/action map and sends a multicast signal to other systems. Receiving systems would then respond with their default action for that command, if they have one. These responses would be collated (graphically, for interest) in order of the number of votes from remote systems (how many agreeing responses each one got).
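The vote-collation step above, separated from the actual multicast networking, might look like this. `VoteCollator` is a hypothetical name, and the responses are simulated rather than received over the wire.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical vote collation: each remote system answers a command query
// with its default action URI; identical answers count as votes, and the
// results are ordered most-popular first for display to the user.
public class VoteCollator {
    public static List<Map.Entry<String, Long>> collate(List<String> responses) {
        return responses.stream()
            .collect(Collectors.groupingBy(a -> a, Collectors.counting()))
            .entrySet().stream()
            .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Simulated multicast responses to the command "edit document".
        List<String> responses = List.of(
            "plugin://editorA/Edit", "plugin://editorA/Edit",
            "plugin://editorB/Edit", "plugin://editorA/Edit");
        collate(responses).forEach(e ->
            System.out.println(e.getKey() + " (" + e.getValue() + " votes)"));
    }
}
```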

These actions, and the local default action, would be shown to the user graphically (after some formatting). The user could then base his decision either on the popularity of the plugin (number of votes) or specifically on the name of the plugin. Whatever he chooses would become the new default action for that command.

In this way, logical commands stated in natural languages would be easily found. Most people would say "Edit document" or "Redacta documento" or "Dokument redigieren". But the user could also create 'hot keys' for programs that he uses often.

What do you think?


Even more abstract

If I understood your approach correctly, you want to map textual commands to actions at runtime and also create the GUI dynamically from these actions. I hope I got it right, at least the main characteristics.

Maybe we can make it even more abstract. As Alex already pointed out in his comment, the command-to-action mappings will have to be localized. So what is the solution to this? Simple ;-): make the system understand the commands, so it can translate them between different languages.

OK, this makes it even more difficult to implement. But I think this is what we really want. The system must also be able to understand the commands in order to incorporate context. I.e., if I'm reading an email from Susan (she's asking whether I will attend her birthday party) and two hours later I tell the system 'Tell Susan I will attend her birthday party', the system will use the context (the birthday party) to figure out which of the three Susans I know to send the email reply to. Of course this involves a kind of presumption, and hence errors caused by misinterpretation might arise, just as happens with humans.

Of course I didn't show a way to 'know' that both the statements 'Please wash my car.' and 'My auto needs a wash.' will lead the speaker or hearer to wash the car. I simply don't know how to do that yet.

Another issue is how to translate a natural-language statement like 'Tell Susan that I will attend her birthday party.' into bytecode / actions. I imagine some graph-based system that would analyze the sentence and see that 'tell ... that ...' is an action (e.g. send an email), that 'Susan' is a person acting as the recipient of the email (a parameter to the 'tell ... that ...' action), and that 'I will attend her birthday party' is the contents of the message (i.e. the second parameter of the 'tell ... that ...' action).
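A toy version of that analysis, with a single regular expression standing in for the whole graph-based analyzer. `TellParser` and the action name are invented for the example; a real system would of course need far more than one pattern.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Toy sketch of the 'tell ... that ...' analysis: the recipient and the
// message become the two parameters of a hypothetical send-email action.
public class TellParser {
    private static final Pattern TELL =
        Pattern.compile("Tell (\\w+) that (.+?)\\.?");

    // Returns { recipient, message } or null if the sentence doesn't match.
    public static String[] parse(String sentence) {
        Matcher m = TELL.matcher(sentence);
        if (!m.matches()) return null;
        return new String[] { m.group(1), m.group(2) };
    }

    public static void main(String[] args) {
        String[] parts = parse("Tell Susan that I will attend her birthday party.");
        System.out.println("action=send-email recipient=" + parts[0]);
        System.out.println("body=" + parts[1]);
    }
}
```

The interesting (and unsolved) part is everything this sketch ignores: resolving which Susan, and recognizing that 'Please wash my car.' and 'My auto needs a wash.' mean the same thing.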

I think it might be possible to create a layered model of computer instructions, where each layer adds more abstraction / reliance on context. It would be possible to transform elements of one layer into elements of higher (abstraction) or lower (compilation) layers. This is also necessary, because lower-layer actions / commands will be more efficient to process.
Layer 2: Natural language commands
Layer 1: Action classes
Layer 0: Java Bytecode / Machine code

If you don't see what I mean, please don't mind about my state of mind ;-). I just wanted to check whether anyone else is thinking about something similar. I will try to do some experiments to clarify this stuff for myself, and maybe then I will be able to explain it more conceptually.




I have had this dream before. Your solution is very interesting, and not dissimilar to one that I had. Most of your idea is better than mine, but some of it I need to question. Forgive me if my logic doesn't flow smoothly.

I like the idea of textually requesting a service by function. The main reason is that it could work over the peer network I am developing. If a system got a strange request like 'Document Convert To Spanish', it would search for it locally but probably would not find it. Then the system could send a multicast datagram with the request in it. Other systems would receive the request and try to fill it. Any one that could fill it would respond with the class name and version of the class that matches the request. The originating system would then look up that class name and version, which results in the download and installation of the service. Then the request would be tried again, and the solution would be found.

Unfortunately, there seems to be a major problem with the text protocol: internationalization. Would every service be required to describe itself in every language? Or should we do as Java has done and impose the text protocol as English only (with German grammar :-))?

Another difference between your plan and mine is that I would have let the services control the GUI. Your plan is of course much more interesting. Do you think it is really possible? Specifying parameters could be done through the constructors. An email app would need a URI[] for to, a URI for from, a String subject, a String body, and a byte[][] for attachments. The byte[][] could be a File[], but I would like to get away from the traditional idea of a file (ask me some time :-)).
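Deriving the GUI from a constructor signature is something reflection can already do today. A sketch, where `EmailModel` is a made-up stand-in for a registered service class with exactly the signature described above:

```java
import java.lang.reflect.Constructor;
import java.net.URI;

// Hypothetical service class: its constructor signature IS its parameter spec.
class EmailModel {
    EmailModel(URI[] to, URI from, String subject, String body, byte[][] attachments) {
        // a real model would store these and send the mail
    }
}

public class FieldDeriver {
    public static void main(String[] args) {
        Constructor<?> ctor = EmailModel.class.getDeclaredConstructors()[0];
        // Each parameter type tells a GUI builder which widget to create:
        // URI[] -> multi-contact picker, String -> text field,
        // byte[][] -> attachment chooser.
        for (Class<?> type : ctor.getParameterTypes())
            System.out.println(type.getSimpleName());
    }
}
```

One known limitation: parameter *names* are not kept in class files unless compiled with `-parameters`, so a descriptor list (as suggested in the first post) may still be needed for user-friendly labels.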

Anyway, would the GUI system be able to generate the appropriate fields, place them in a layout that isn't misshapen or unusable, and validate the input (turn a String into a URI)?

I would love to see this work because it would give JNode a consistent and unique look. Another good thing is that it abstracts the view from the model. Add to this a reliable persistence mechanism like JNodeFS, and we have the much-acclaimed three-tier application.

One of the really wonderful things that should come with the separation of view and model is that we should be able to switch the model without losing the data in the view. For example: a user uses a Document Edit model to write a text document. He should then be able to switch to Document Spell-Check and spell-check it, then to Document Email, and send the document. This should happen without re-typing the note or opening it from the hard drive.

This ability is not natively apparent in your system, because each piece of the GUI is generated dynamically. Allow me to suggest this idea: that the user interaction does not start with just a text field and a combo box, but also with a central component.

Every application you use revolves around one important component. A text editor uses a JTextPane, a spreadsheet uses a JTable, a calculator uses a JTextField, a playlist uses a JList, and a paint program uses a Canvas. This list of key components does not exceed eight or nine members.

If these few components were displayed (as icons) along the top of the desktop, the user could select the one he needs depending on what he wants to do. This selection also gives us more specific model-search capabilities. The important idea here is that the central component (say a JTextPane) would stay in place (and retain its internal data) as the model is switched around it. This would allow the user to write a note, spell-check it, save it, and email it, very easily from one frame.
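A non-GUI sketch of that idea: the central component's content lives in one shared buffer, and the Edit / Spell-Check / Email "models" are swapped around it without the text being lost or re-typed. All class names are hypothetical, and the spell-check is a one-word toy.

```java
// Each model transforms or consumes the central component's text.
interface DocumentModel {
    String apply(String content);
}

class SpellCheckModel implements DocumentModel {
    public String apply(String content) {
        return content.replace("teh", "the");  // toy correction
    }
}

class EmailSendModel implements DocumentModel {
    public String apply(String content) {
        System.out.println("sending: " + content);
        return content;                        // the buffer survives the send
    }
}

public class CentralComponent {
    private String buffer = "";                // the JTextPane's data, in effect

    void type(String text)       { buffer += text; }
    void attach(DocumentModel m) { buffer = m.apply(buffer); }
    String buffer()              { return buffer; }

    public static void main(String[] args) {
        CentralComponent c = new CentralComponent();
        c.type("teh note");
        c.attach(new SpellCheckModel());       // model switched, data retained
        c.attach(new EmailSendModel());        // switched again, still no re-typing
        System.out.println(c.buffer());
    }
}
```

In a real frame the buffer would be the component's own document object, but the principle is the same: the view holds the data, the models come and go.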

There are still more issues to consider, but I will take a break now and let someone else have a chance.

Let me know what you think.



I don't wish to rain on your parade, but there is nothing unique about JNode that will make these ideas work. JNode is an attempt to make a type-safe, modular operating system in which development and maintenance are easy. The operating system is supposed to manage the resources of the machine and to organize some low-level data structures (files, device drivers, etc.) that facilitate development at higher levels. JNode's current objective is to replicate a JRE.

You can go and write a user interface system like the one you described above right now. There is no need to wait for JNode. JNode (when it is more complete) will provide you with a Java Runtime Environment. But you already have one: download one from Sun's website and go ahead. Then, when JNode is as strong as a JRE, you can plug in your user interface code, and it will be happily received in JNode.

I have heard others suggest that we need a 3-D user interface, or a ??? user interface, and I just wanted to toss in my two cents that

An Operating System is not Its User Interface.

Try working with services in C/C++...

Try doing that... it's painful. JNode should not be just a VM running directly on hardware. JNode should be a platform to integrate and use the true power of modern Java technologies. Why else do we have log4j integrated in JNode, or other Java technologies? You know that this simple thing called logging is a pain in the ass in all other OSes. JNode should try to bring the power of Java technologies more into the light... Do you actually think that without innovation we can attract users? Well... we won't. Check it out if you want to see what we are fighting against...

That's on my mind too

I think the same way; other things should be done before this.
Some of them, for example, are:

- getting the filesystems to work (even FAT16 does not seem to work correctly)
- writing more docs (Multithreading / Synchronization)
- the L2 compiler
- more testing (-> fixing existing errors [like what I described with IDEs])


Agreed, but what does one have to do with the other?

Agreed. But why can't we think about the future?

That was the right time -> the future

That will be OK, if we focus on the things mentioned above for now...
(Maybe the design of some new parts of JNode should take these future questions into account...)