Ali Saif, in The Plunge and Surface, describes so well a state of mind that has been predominant in my life:

“The sharp awareness of the present-moment and spontaneity of emotional response is lost, made sluggish rather. I often find I smile at something a microsecond too late and then remain smiling while others have moved on.”

Except that for me, the “sharp awareness” was never there to be lost in the first place. It was there to be discovered in transient moments imbued with a sense of enlightenment.

And then there is Ali’s conclusion, which came to me like thunder from a clear sky:

“Quite understandably it leads to negative-spiral thought process and frustration.”

This has never been “quite understandable” to me. Maybe there really is a connection and I never saw it. Negativity and frustration are certainly far from foreign to me.

“I’ve noticed that the faculty of deep thought, if you will, has remained intact through all of this.”

This, on the other hand, has always been clear as blue skies to me. Deep thought does not just remain. It is the only thing left in the house, and it gets free rein. This kind of state of mind is the great enabler of deep thought, and of the kind of “flow” that has always been the conductor of my most productive computer programming work.

I read an article on mindfulness once that associated mindfulness practices like meditation, which enhance awareness of one’s surroundings and ability to stay in the moment, with the infamous “flow” credited for hyper productive engineering work and such.

Based on my experience the article got things completely backwards. Computer work “flow” is exactly the opposite of mindfulness, and very much like the plunge that Ali describes.

Do not get me wrong. Mindfulness can result in a hyper increase in concentration and focus of attention, and I have experienced this. Only not in a way that is conducive to computer work in flow.

I look forward to the day I get to experience computer work in a hyper productive mindful flow. For now all I can say is that computer work in flow quickly induces a “plunge” which can easily become a steady state of being. I am in it and I am surrounded by it in Silicon Valley.

Dart vs Java (cont’d) — Richards and Tracer

This week I managed to port the rest of Dart’s benchmark_harness examples to Java.

The experience of porting Richards and Tracer was as smooth as that of porting the DeltaBlue benchmark. The only unfamiliar (and interesting) Dart feature I encountered that is worth noting was the ability to declare and pass method parameters by name.
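For illustration, here is one way a named-parameter call might be translated; the class and argument names below are made up, not taken from the actual benchmarks. A Dart call such as `new Constraint(strength: required, input: v)` has no direct Java equivalent, so the straightforward translation is positional parameters, optionally documented with inline comments:

```java
// Illustrative only: class and argument names are hypothetical.
public class NamedParamsDemo {
    static final class Constraint {
        final String strength;
        final String input;

        Constraint(String strength, String input) { // positional in Java
            this.strength = strength;
            this.input = input;
        }
    }

    public static void main(String[] args) {
        // The argument roles are conveyed by position (plus comments),
        // not by name at the call site as in Dart.
        Constraint c = new Constraint(/* strength: */ "required", /* input: */ "v1");
        System.out.println(c.strength + " " + c.input); // prints: required v1
    }
}
```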

Here are the numbers:

Richards Benchmark

Tracer Benchmark

The results this time are limited to two recent nightly releases of the Dart SDK (22577 and 22720), and 32-bit and 64-bit Java. I ran each benchmark 3 times to warm up, then ran it 5 more times and took the best time of those 5 runs as the final number you see on the charts.
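As a rough sketch (this is not the actual benchmark_harness code, and the names are made up), the measurement procedure described above might look like this in Java:

```java
// A sketch of the "3 warm-up runs, then best of 5 measured runs" procedure.
public class Harness {
    interface Benchmark { void run(); }

    // Run `warmups` untimed iterations, then `measured` timed ones,
    // returning the best (smallest) time in nanoseconds.
    static long bestOf(Benchmark b, int warmups, int measured) {
        for (int i = 0; i < warmups; i++) b.run();
        long best = Long.MAX_VALUE;
        for (int i = 0; i < measured; i++) {
            long t0 = System.nanoTime();
            b.run();
            best = Math.min(best, System.nanoTime() - t0);
        }
        return best;
    }

    public static void main(String[] args) {
        // Placeholder body; a real benchmark would do actual work here.
        long best = bestOf(() -> { Math.sqrt(42.0); }, 3, 5);
        System.out.println("best time (ns): " + best);
    }
}
```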

I did a lot of experimenting based on feedback I got on the Dart mailing list. I am well aware that Java needs to run a method some number of times before the JIT kicks in, and of the caveats of OSR. However, in my tests I found no substantial differences when running the benchmarks with a longer warm-up time.

Java 8 results were not different enough to warrant inclusion in my tests. I test both Java and Dart VMs with default settings, and do not intend to tweak custom VM flags in order to optimize each VM. It is obvious that a lot can be done by tweaking various VM parameters. My goal here is to get a gut feel for the relative out-of-the-box performance.

I was told that for a fair comparison Dart should be evaluated against the 32-bit client JVM, as the Dart VM is also optimized for use on client devices (with more focus on things such as faster startup over long-term throughput). Hence, I include the 32-bit Client JVM in my tests. However, for all practical purposes, 64-bit JVMs are more relevant and more widely used nowadays, so I feel obliged to still include the 64-bit server JVM in my tests. There is no client version of the 64-bit JVM, by the way. To be fair, 64-bit compilation does have the advantage of access to a much larger set of registers, which can be used to gain performance.

Dart vs Java — the DeltaBlue Benchmark

As of the time of this writing, the performance page tracks Dart VM performance as measured by the DeltaBlue benchmark.

I ported the benchmark_harness Dart package (including the DeltaBlue benchmark) to Java and ran it against the latest Java 7 and 8 JDKs.

The experience of translating Dart to Java was surprisingly smooth. Some of the most common small porting tasks included:

  • Dart bool to Java boolean;
  • Dart C++-like super call syntax;
  • Dart constructor syntactic sugar;
  • Dart shorthand (=>) functions to Java full format;
  • Wrapping Dart top-level functions and variables inside a Java top-level class;
  • Changing the use of the Dart Function type to a Java Runnable;
  • The Dart truncating division operator ~/, which apparently is equivalent to plain division (/) when applied to integers;
  • Dart list access [] operator to Java List.get().
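A few of these translations can be illustrated with a small, self-contained Java sketch (the names are illustrative, not taken from the benchmarks):

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.IntUnaryOperator;

public class PortingExamples {
    public static void main(String[] args) {
        // Dart: bool done = true;  ->  Java boolean
        boolean done = true;

        // Dart shorthand function: int twice(int x) => 2 * x;
        // becomes a full Java method body, or here a lambda:
        IntUnaryOperator twice = x -> 2 * x;

        // Dart truncating division: 7 ~/ 2 == 3.
        // On Java ints, plain / already truncates:
        int q = 7 / 2; // 3

        // Dart list access: list[1]  ->  Java List.get(1)
        List<Integer> list = Arrays.asList(10, 20, 30);
        int second = list.get(1); // 20

        System.out.println(done + " " + twice.applyAsInt(4) + " " + q + " " + second);
        // prints: true 8 3 20
    }
}
```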

The trickiest part of the translation was the following piece of code that appeared absolutely befuddling at first sight:


As it turns out, this is simply an array literal


prefixed by a generic type parameter specifying the type of the elements in the list


and followed by the list access ([]) operator, getting the element of the list at index value:


After working my way through this, the translation went smoothly until I got to run the benchmark and hit a NullPointerException. In DeltaBlue, the BinaryConstraint constructor calls addConstraint(), which is overridden in its subclasses. The ScaleConstraint subclass implementation of addConstraint(), in particular, accesses ScaleConstraint fields that are initialized in the constructor. This pattern works in Dart, where apparently “this” constructor arguments are stored in their corresponding instance fields before the super constructor is invoked. Since this is not possible in Java (the super call must be the first statement in the constructor), I moved the addConstraint() call from BinaryConstraint to each of the subclass constructors. With that fix, the port was complete and I was able to run the Java version of the benchmark.
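Here is a minimal sketch of the problem and the fix, with simplified names inspired by, but not copied from, the benchmark:

```java
// In Java, super(...) must come first, so a virtual call made from the base
// constructor would see uninitialized subclass fields. The fix moves
// addConstraint() into each subclass constructor, after its own fields are set.
public class ConstructorOrderDemo {
    static class BinaryConstraint {
        BinaryConstraint() {
            // The original Dart pattern called addConstraint() here, which in
            // Java would run before ScaleConstraint.scale is initialized.
        }
        void addConstraint() { /* registers this constraint */ }
    }

    static class ScaleConstraint extends BinaryConstraint {
        final int scale;
        ScaleConstraint(int scale) {
            super();            // must be the first statement in Java
            this.scale = scale; // initialize subclass state first...
            addConstraint();    // ...then register, now that fields are set
        }
    }

    public static void main(String[] args) {
        System.out.println(new ScaleConstraint(10).scale); // prints 10
    }
}
```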

Here are the DeltaBlue numbers for Dart and Java on my ThinkPad W510:

VM    Runtime (us)    Score (runs/sec)
Dart (22416)    2,810.39    355.82
Dart (22577)    2,283.11    438.00
Java (1.7.0_21-b11)    2,728.51    366.50
Java (1.8.0-ea)    2,693.14    371.31
Java (1.7 32-bit)    3,555.95    281.22

Update 5/11 More numbers: running for 45 seconds improves the performance of the 64-bit JVM (1.7,45s) but not of the 32-bit one (1.7 32-bit,45s); the 32-bit Server JVM (32-server,45s) performs just as fast as the 64-bit JVM; the xxgreg version (xxgreg,45s) of DeltaBlue runs slower on the 64-bit JVM than my version ported from Dart; and the xxgreg benchmark (xxgreg-run) uses a different harness whose measurements include VM startup and warm-up time.

VM    Runtime (us)    Score (runs/sec)
Java (1.7 32-bit,45s)    3,533.99    282.97
Java (32-server,45s)    2,701.67    370.14
Java (1.7,45s)    2,559.38    390.72
Java (xxgreg,45s)    2,780.61    359.63
Dart (xxgreg-run)    2,356.70    424.32
Java (xxgreg-run)    2,800.10    357.13

The number in the first column is the runtime in microseconds (us) as reported by the benchmark harness at the end of a run. The second number is the score as defined on the performance page: “runs/second.” I ran the benchmark on each VM multiple times; since the variance between runs was small, I picked the result from one arbitrary execution for each VM.

The first Dart VM (22416) is the current public release available on the Dart website, while 22577 is the current nightly build. I included the nightly build because the performance page clearly shows that Dart saw a major gain in performance as of build 22437. My test confirmed this observation.

The results are truly impressive. Dart, still a baby at 2 years of age and pre-1.0, already exhibits 15% better performance than Java, a veteran of 18 years. I think this truly deserves to be called a case of David vs Goliath.

Update: Both Dart VMs tested are 32-bit, while the two original Java VMs are 64-bit. I also tested the 32-bit Java 1.7.0_21 VM, with even more disappointing results.

The History of Many-core

When looking for a good reference to back the “many-core problem” assertion in my Master’s thesis, this is the best primary source I could find. Multicore: Fallout of a Hardware Revolution contains an excellent description of the reasons behind the shift from increasing clock speeds to multiplying the number of cores in modern CPUs.

In particular:

“Hidden concurrency burns power
Speculation, dynamic dependence checking, etc.
Push parallelism discovery to software (compilers and application programmers) to save power”

…and it is a hidden treasure trove of information on the history of all modern processor architecture optimization techniques.

Value Objects in Newspeak

This is a quick dump of a rough design sketch for Value objects in Newspeak, which builds upon section 3.1.1 of the current version of the Newspeak language specification.

  1. Value classes make intent explicit. The class declaration is automatically annotated with metadata that expresses the intent for instances to be value objects.
  2. Value classes use special syntax that introduces said metadata annotation (e.g. valueclass X instead of class X).
  3. Value classes can only be mixed in with other Value classes.
  4. Value classes can only have immutable slots.
  5. The root of the value class hierarchy is Value, which extends Object. The Value class overrides the == method and delegates it to =. The Value class overrides = to compare all the slots recursively using =. The Value class overrides the asString method to give a neat stringified representation of the Value object in a JSON-like format. Value class computations for = and asString bottom out on built-in Value classes like Number, Character, String, etc. (overriding = and asString is explicitly inspired by the behavior of case classes in Scala).
  6. The Value class overrides the identityHash method to delegate to the hash method, and overrides the hash methods with some simple, yet-to-be-determined, recursive hashing algorithm (e.g. XOR-ing the hashes of all the slots).
  7. Value objects can only point to other Value objects.
  8. Value class declarations can only be nested inside other Value class declarations.
    Update 2/10/2012: Another option that seems very attractive right now would be to allow value class declarations to be lexically nested inside non-value class declarations but cut off from the non-value part of their lexical scope (the enclosing object chain stops at the outermost value class, excluding all enclosing non-value classes).
  9. This implies the enclosing object of a Value class is always a value object.
  10. Simply annotating a class declaration as “<Value>” is not enough. Syntax is required for valueclass declarations in order to ensure that Value classes always extend other Value classes. This allows a Value class with no explicit superclass clause to implicitly extend the built-in Value class, instead of Object, which is the default superclass for regular classes.
  11. The constraints on Value objects and Value classes are verified at mixin application time (the superclass is a Value class), and object construction time (all slots contain other Value objects).
  12. The enclosing object does not need to be verified at mixin application time, because the enclosing scope of a Value class declaration can be verified at compile time.
  13. Value classes are also Value objects.
  14. nil is a Value object.
  15. Value class declarations can contain nested non-value (regular) class declarations. More generally speaking, Value objects can produce (act as factories for) non-value objects.
    Update 2/10/2012: An important corollary of the above is that non-value classes enclosed in a value object are value objects themselves.
  16. Value objects are awesome! They are containers for data and the unit of data transfer between Actors in Newspeak, and also the building block for immutable data structures.
  17. Update 11/24/2011:
  1. Every class whose enclosing object is a value object is also a value object (but not necessarily a value class!).
    Update 11/27/2011:
    Justification for the above: if multiple equivalent instances of a value class are indistinguishable, then all of the instances’ constituent parts, nested classes included, must be indistinguishable as well. Think: (a == b) holds, but (a NestedClass == b NestedClass) does not. This is unacceptable!
  2. We must determine rules for when closure and activation objects are value objects, so we can safely deal with simultaneous slots in value classes (at construction time, the closure object that captures each simultaneous slot initializer must be a value object, then at lazy evaluation time, the result must be a value object, otherwise an exception is thrown and the simultaneous slot is not resolved).
    Update 2/10/2012: One alternative that comes to mind but does not seem very attractive would be to have special syntax for closures that are value objects, say {{ … }} denotes a closure that is always a value object and has no access to enclosing mutable state.
    A more attractive alternative would be to extend the syntax for object literals to support value object literals. All of a sudden object literals appear much more important than before. For example, value-object closures and/or object literals make it possible to build a Scala-like parallel collections library on top of actors.
    Actually, the above is not quite correct: a Scala-like parallel collections library in Newspeak would benefit more from value class literals that can be nested inside non-value classes.
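As a rough analogue of points 5 and 6 above (in Java rather than Newspeak, with made-up class names), recursive structural equality and an XOR-of-slots hash might look like:

```java
// Hypothetical Java analogue: structural equality and hashing recurse over
// immutable slots and bottom out on built-in types.
public class ValueDemo {
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Integer.hashCode(x) ^ Integer.hashCode(y); }
        @Override public String toString() { return "{\"x\": " + x + ", \"y\": " + y + "}"; }
    }

    static final class Segment {
        final Point a, b;
        Segment(Point a, Point b) { this.a = a; this.b = b; }
        @Override public boolean equals(Object o) {
            return o instanceof Segment
                && ((Segment) o).a.equals(a) && ((Segment) o).b.equals(b); // recursive =
        }
        @Override public int hashCode() { return a.hashCode() ^ b.hashCode(); } // XOR of slots
        @Override public String toString() { return "{\"a\": " + a + ", \"b\": " + b + "}"; }
    }

    public static void main(String[] args) {
        Segment s1 = new Segment(new Point(1, 2), new Point(3, 4));
        Segment s2 = new Segment(new Point(1, 2), new Point(3, 4));
        System.out.println(s1.equals(s2)); // structural equality: true
        System.out.println(s1);            // JSON-like representation
    }
}
```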

References and Actors

In E, references are distinct from the objects they designate. This might seem apparent, but it is not necessarily so. In traditional languages like Java, first-class references are almost indistinguishable from the objects they designate. They are internally represented as 4- to 8-byte pointers and while there is a distinction between reference equality (two references pointing to the same object) and object equality (two distinct objects with identical/indistinguishable contents and behavior), there is not much else to worry about.

In E, however, the rabbit hole goes deeper. There are multiple *types* of references. The word type might be a misnomer, but I find it a good way to think about references. The E thesis does not discuss types of references; rather, it states that a reference can be in one of the following states:

  • Local Promise
  • Remote Promise
  • Near Reference
  • Far Reference
  • Broken Reference

In this discussion reference type is a synonym for reference state. The reason the term state seems more appropriate is that a single reference goes through several transitions between states in the course of its lifetime. In other words, a reference can switch types.

The problem this poses for an implementation is that references in different states hold different information. A near reference is the simplest case, the familiar reference from Java: it holds the address of an object within the current VM’s heap. A promise, however, holds an unbounded list of pending messages and whenResolved listeners. A far reference holds whatever information is necessary to transmit messages to its target object, including potentially a distinct queue of messages pending delivery. A broken reference holds exception information regarding the reason for the breakage.

Classes are a natural way to implement each different state of a reference: the information and behavior for each state is represented by a distinct class. The problem arises when the reference needs to switch states, at which point the need arises for an object to change its class dynamically, which is not functionality traditionally available in object-oriented programming languages.

Another problem is the possibility that references might chain. In other words, the possibility that a reference might point to another reference, instead of directly pointing to an object. A promise might get resolved to another promise. Or, even more disturbingly, a far reference might point to a promise reference. This possibility of chaining is actually excluded in the reference states model presented by Miller. Instead of a promise resolving to another promise, the promise reference simply makes a state transition, or in other words, the reference becomes the other promise, instead of pointing to it. In a similar fashion, a promise will get deserialized as a promise for the same result as the original promise, instead of being deserialized as a far reference to a promise (which would introduce chaining of references).
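One way to sketch this state-switching in Java (with hypothetical names; this is not E's or my actual implementation) is a reference object that delegates to a swappable state object and flushes buffered messages when a promise resolves:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A reference "changes its class" by swapping the state object it delegates to.
public class RefDemo {
    interface RefState { void send(String message); }

    static final class PromiseState implements RefState {
        final Queue<String> pending = new ArrayDeque<>(); // unbounded buffer
        public void send(String m) { pending.add(m); }    // buffer while unresolved
    }

    static final class NearState implements RefState {
        public void send(String m) { System.out.println("delivered: " + m); }
    }

    static final class Ref {
        private RefState state;
        Ref(RefState initial) { state = initial; }
        void send(String m) { state.send(m); }

        // The state transition: flush any buffered messages to the new state,
        // then "become" it.
        void resolveTo(RefState next) {
            if (state instanceof PromiseState) {
                for (String m : ((PromiseState) state).pending) next.send(m);
            }
            state = next;
        }
    }

    public static void main(String[] args) {
        Ref ref = new Ref(new PromiseState());
        ref.send("msg1");                // buffered by the promise state
        ref.resolveTo(new NearState());  // promise resolves to a near reference
        ref.send("msg2");                // delivered immediately
    }
}
```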

This, in essence, leads to an important conclusion. The serialization/deserialization implementation must include special handling logic for references in different states. For instance:

  • A near reference might have to be deserialized as a far reference.
  • A near promise is deserialized as a remote promise linked to the same resolver as the original near promise.


Furthermore, the resolution logic must handle the distinct cases as well, in order to handle the different state transitions that can originate from the promise states:

  • Become another promise (for a new result)
  • Resolve to and become a far or near reference

As already discussed, the most natural way to implement the different reference states is as classes. At this point the notion of reference states as types comes into the picture. References are objects in our runtime VM, but they are a distinct type of proxy object that provides special services and requires distinct treatment from regular application objects. As explained above, for deserialization and resolution purposes, we need to be able to distinguish between a near reference to an application object and a near reference to a reference object (since fundamentally the runtime VM only provides primitive support for near references, and the other reference states are reified as regular objects). Furthermore, it is clear from the examples above that we also need to be able to distinguish between the different *types* of reference states (Local Promise vs Remote Promise vs Far Reference, etc.).

Since in Newspeak there is no global namespace for classes, and at runtime all classes are simply a dynamic aggregation of mixin applications, we cannot test the class names of objects. Conceptually, this would be equivalent to having some sort of type system anyway. But this is exactly what we need: to be able to distinguish between different types of objects (one per reference state), albeit a very restricted set.

Since the Past and Actors libraries are a core part of the language, I propose to meet the need for type checking using the following idiom, which is slightly different from the is* message idiom for arbitrary objects already implemented in the NewspeakObject doesNotUnderstand: protocol. The idea is that since the Past and Actors libraries are singleton modules, the sole managers of instances of the objects that represent reference states, and unlikely candidates for extension by applications, we can simply test for class equality like this:

(obj class = Promise)

where obj is an object whose type is being tested and Promise is a reference to the class instance local to the current singleton module instance. Naturally,

Promise new

is exclusively used to construct Promises from within the Past module, for example.

On Openness

I am a firm believer in openness. That is the reason I believe open source has such great value. The way I see it, the word open in “open source” does not just refer to the source code: it also means open communication, open structure, open management… openness in every aspect of a project.

Yet, in one of my own projects I failed to abide by my own principle. Two years ago, at the end of the summer of 2008, I left my small GSoC project, a split editor for Eclipse, in the state of a working prototype to begin a full-time job and join a master’s program. For two years now I have neglected the split editor project and kept completely silent, to the point that people have even forgotten that there was ever anyone involved in this effort.

I am now nearing the end of my master’s program and, with all classwork completed, am getting ready to begin work on a thesis. Before I do that, however, I wanted to clarify the state of affairs of my split editor work. I have not forgotten about it, and I am determined to complete it eventually. Although I will not have any time to dedicate to this work for another year, it is first in my queue of side projects after completing my master’s.

Actually, if anyone is willing to pick up the work now, I will be more than willing to provide whatever support I can. Here are the patches with my latest work, updated against the current Eclipse release as of the time of writing – 3.6.1 (R3_6_1 label in CVS):

The file contains four separate patches for the four Eclipse plugin projects involved in the split editor implementation:

  • org.eclipse.ui.workbench – this project contains the bulk of the split editor work.
  • org.eclipse.ui,
  • org.eclipse.ui.editors,
  • org.eclipse.jdt.ui – the above three projects contain mostly configuration changes to activate the split editor for Java and Text editors.

To see the split editor in action, check out these four projects from the Eclipse CVS repo (at label R3_6_1), apply the patches and start up an Eclipse launch configuration. If you want to try this but get lost or none of this makes sense, post a comment here and I will be happy to provide more detail.

I have always wanted to make it very easy for people to try out and experience the split editor at the earliest possible stage of its development (at which it stands currently – there are quite a few known bugs). The best way I see for this would be to share a custom build of Eclipse with the split editor work compiled in. Unfortunately, I have never been able to successfully build Eclipse from source. I gave it a shot two years ago, and more recently, I spent the last two months frantically trying to build the Eclipse 3.6/3.5/3.7 SDKs, without any success. It seems like I am not alone in this. If at any point fortune strikes me, you can be certain that Eclipse packages with split editor support will appear here immediately. If anyone is willing and able to help with this, please get in touch!


Integrated split editor prototype

A new split editor prototype that is integrated into the Eclipse workbench API has been available as a patch on the Eclipse Bugzilla for some time now. Unfortunately, I have not been able to create an easily deployable plugin that can be installed in any Eclipse 3.4 distribution (by copying it to the dropins or plugins directory). Even if I did, I would probably have to worry about the legal aspects of redistributing Eclipse code, since the plugin would no longer contain only my code.

That said, you can test out the latest split editor by checking out the o.e.ui.workbench project in Eclipse, applying the patch from Bugzilla, and debugging Eclipse from Eclipse. There are no immediately visible changes, but it would be very helpful to have as many people as possible test the split editor in everyday use. I do not expect a lot of users to go through the above procedure, however, so I will keep trying to get an easily installable package out.

Next time I will talk about the internal integration design of the split editor because I believe this is the most interesting part of all. This will be of particular interest to Eclipse plugin developers who want to know how to enable editor splitting for their custom editors.


There is no new Java split editor yet.

I got overwhelmed by new issues that I discovered while testing last week’s prototype, including the fact that most preference changes either do not propagate to both editors or cause exceptions.

I am looking into the alternative MultiEditor-based approach and this is leading me to some interesting ideas that I am about to try out… so there should be interesting split-editor-related stuff coming soon.

Working prototype of Eclipse split editor

The first working prototype of the split editor is here. You can download it and play with it in Eclipse 3.4.

Basic functions like text cursor, current line highlight (these did not work in the first prototype), undo/redo, cut/copy/paste, status bar items (insert mode, line and column numbers) work fine and stay in sync when switching between the top and bottom half of the split editor.

There are also some known issues:

  • Ctrl+Left/Right/Home/End keyboard shortcuts always operate on the top part of the split editor, even if the cursor is (blinking) in the bottom half.
  • Navigating back and forth browser-style using the Alt+Left/Right keys does not remember the correct editor along with the location, and does not move between the top/bottom halves of the editor. I actually have no plans to fix this. This problem also occurs when opening multiple editor tabs.

Here are some images that demonstrate how to open and use the split editor.

First select the Open With dialog…

Then select the [Split Text Editor] option…

Split the Editor by clicking and dragging on the narrow line that runs all the way across above the first line.

And there you go… long live the split editor!

You can try different commands out and see if they work when you move between the two sides (top and bottom) of the editor…

That’s it for now. Enjoy!

Bonus The plugin now includes source code so you can see the kind of monster that lurks in the background to make editor splitting work.

Up Next A split Java editor, after I resolve the remaining known issues with basic editing functionality. I will also put up an update site for faster/easier delivery of future updates.

First split editor prototypes available

In the true spirit of “release early, release often” I am making my first prototype of a split editor available now as an Eclipse plugin. The plugin works in the new Eclipse 3.4 Ganymede. Download, place in your Eclipse dropins directory and you’re set to go.

Instructions on how to test the plugin are available on the wiki.

Keep in mind that this is a very early prototype and it barely works. There are a lot of issues. If you do download it, please feel free to share your experience with me by posting your (encouraging/bashful/hateful/frustrated/…) comment to this blog or the wiki.

There will be lots more in the coming weeks, so stay tuned!