Provides real-time {@link javolution.realtime.Context contexts} for higher performance and more predictable execution of Java bytecode.
This package provides thread-local {@link javolution.realtime.Context contexts} integrated with the Real-Time Specification for Java (RTSJ). They can be used to significantly decrease the "worst-case" execution time, leverage most of RTSJ capabilities and reduce/eliminate the need for object creation and garbage collection.
For example, {@link javolution.realtime.ObjectFactory factories} can be declared static (allocated in ImmortalMemory) and safely used by any real-time thread.
Similarly, contexts can be allocated in a specific memory area and their behavior is not affected by the current thread's execution area. In other words, a NoHeapRealtimeThread may run in ScopedMemory and create (through factories) persistent objects from a {@link javolution.realtime.PoolContext PoolContext} allocated in ImmortalMemory, with Javolution doing the recycling (necessary as ImmortalMemory is never garbage collected).
Care must be taken with static class members, which are automatically allocated in ImmortalMemory with RTSJ VMs:[code]
public class XmlFormat {
    // RTSJ Unsafe! Memory leaks (when entries are removed) or IllegalAssignmentError
    // (when new entries are added while in ScopedMemory).
    static HashMap<Class, XmlFormat> classToFormat = new HashMap<Class, XmlFormat>();
}[/code]
You cannot get determinism using "any" library (including the Java standard library) regardless of the garbage-collector issue. Array resizing, lazy initialization, map rehashing (...) would all introduce unexpected delays (this is why Javolution comes with its own real-time collection implementations). Still, you have several options:
For example, you can run time-critical threads as NoHeapRealtimeThread at higher priority than the garbage collector. This may require using pool contexts allocated in ImmortalMemory for persistency (as ScopedMemory can only be used for temporary objects).
The basic idea is to associate object pools with Java threads. These pools can be nested, with the heap being the root of all pools. You may consider pooled objects as part of the thread's stack memory, with pools being pushed and popped as the thread enters/exits a {@link javolution.realtime.PoolContext PoolContext}. To allocate from its stack, a thread needs to execute within a pool context and create new objects using {@link javolution.realtime.ObjectFactory factories} (the "new" keyword always allocates on the heap; Javolution does not/cannot change the Java Virtual Machine behavior). This mechanism is similar to the allocation on the stack of locally declared primitive variables, but extended to non-primitive types.
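The push/pop pool mechanics described above can be sketched in plain Java. This is a minimal illustration of the idea only; SimplePool and its methods are hypothetical names, not Javolution's implementation:

```java
import java.util.ArrayList;

// Minimal sketch of a mark/reset object pool (hypothetical, NOT Javolution's API):
// "exiting" the context just resets a stack pointer, so pooled objects are reused
// instead of being garbage collected.
final class SimplePool {
    private final ArrayList<double[]> objects = new ArrayList<double[]>();
    private int index; // Next object to hand out (the "stack pointer").

    double[] next() { // Returns a recycled object if available, else creates one.
        if (index == objects.size()) objects.add(new double[2]);
        return objects.get(index++);
    }

    void reset() { index = 0; } // Analogue of exiting the pool context: all recycled at once.
}

public class PoolSketch {
    public static void main(String[] args) {
        SimplePool pool = new SimplePool();
        double[] first = pool.next();
        pool.reset();                  // End of "context": everything recycled.
        double[] reused = pool.next();
        System.out.println(first == reused); // true: same instance, no new allocation.
    }
}
```

The key property mirrored here is that recycling is a constant-time pointer reset, regardless of how many objects were handed out.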
Classes that encapsulate calls to object factories within factory methods (e.g. valueOf(...)) and whose methods do not create temporary objects on the heap are known as "real-time compliant".
The simplest way is to extend {@link javolution.realtime.RealtimeObject RealtimeObject}
and use a factory to create new instances. For example:[code]
public static final class Coordinates extends RealtimeObject {
private double _latitude;
private double _longitude;
    private static final Factory<Coordinates> FACTORY = new Factory<Coordinates>() {
        protected Coordinates create() {
            return new Coordinates();
        }
    };

    private Coordinates() {}

    public static Coordinates valueOf(double latitude, double longitude) {
        Coordinates c = FACTORY.object();
        c._latitude = latitude;
        c._longitude = longitude;
        return c;
    }
}[/code]
The following code shows the accelerating effect of stack allocations.[code]
public static void main(String[] args) {
ClassInitializer.initialize(PoolContext.class); // To avoid measuring class initialization time.
Coordinates[] vertices = new Coordinates[100000];
for (int i=0; i < 10; i++) {
long time = System.nanoTime();
PoolContext.enter();
try {
for (int j = 0; j < vertices.length; j++) {
vertices[j] = Coordinates.valueOf(i, j);
//vertices[j] = new Coordinates(i, j);
}
} finally {
PoolContext.exit();
}
time = System.nanoTime()-time;
System.out.println("Time = " + time / 1E6);
}
}[/code]
The first iteration is slower in this example because the pool context has not been loaded at start-up (any context can be serialized/deserialized) and most objects are created during the first iteration. Subsequent iterations are not only faster but also very consistent in time, as no memory allocation or garbage collection will ever occur:
Time = 36.576713
Time = 10.310808
Time = 10.024459
Time = 10.228395
Time = 10.078376
Time = 10.298236
Time = 10.480103
Time = 10.081449
Time = 10.236496
Time = 10.247391
The same program allocating directly on the heap (e.g. new Coordinates(i, j)) produces the following result:
Time = 27.496512
Time = 13.557868
Time = 12.621716
Time = 51.020451
Time = 9.327443
Time = 8.798604
Time = 23.881527
Time = 8.829055
Time = 60.977303
Time = 7.708242
As you can see, there is much more fluctuation in execution time due to garbage collection interference (making the average time several times greater).
Stack allocation is a simple and transparent way to make your methods "clean" (no garbage generated); it also has the side effect of making your methods faster and more time-predictable. If all your methods are "clean", then your whole application is "clean", faster and more time-predictable (aka real-time).
Not all VMs are created equal. The speed and real-time characteristics of object creation/garbage collection may vary significantly. By using stack allocation you ensure consistent behavior regardless of the client VM.
Applications may use the facility to different degrees. For example, to improve performance one might identify the biggest "garbage producers" and use stack allocations instead of heap allocations for those only. Others might want to run high priority threads in a pool context and by avoiding heap allocations (and potential GC wait), make these threads highly deterministic.
In practice, very few methods declare a pool context for local stack allocations, only the "dirty" ones (the ones generating a lot of garbage). Iterations are often good candidates as they typically generate a lot of garbage. For example:[code]
public Matrix pow(int exp) {
    PoolContext.enter(); // Starts local stack allocation.
    try {
        Matrix pow2 = this;
        Matrix result = null;
        while (exp >= 1) { // Iteration.
            if ((exp & 1) == 1) {
                result = (result == null) ? pow2 : result.times(pow2);
            }
            pow2 = pow2.times(pow2);
            exp >>>= 1;
        }
        return result.export(); // Exports result to outer stack (or heap).
    } finally {
        PoolContext.exit(); // Resets local stack (all temporary objects recycled at once).
    }
}[/code]
For the very "dirty" methods (e.g. very long iterations), one pool context might not be enough and may cause memory overflow. You might have to break the iteration loop down and use inner pool contexts. Also, by using multiple layers of small nested pool contexts instead of a single large pool, you keep the pools' memory footprint very low while still benefiting fully from the facility. Pools of a few dozen objects are almost as efficient as larger pools, because entering/exiting pool contexts is fast and the CPU cache is more effective with small pools.
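The loop-chunking pattern can be sketched as follows. The PoolContext here is a trivial stand-in stub so the sketch is self-contained, and the chunk size of 1000 is an arbitrary assumption:

```java
public class ChunkedIteration {
    // Stand-in stub for javolution.realtime.PoolContext so this sketch compiles alone.
    // The real class manages object pools; this one only tracks nesting depth.
    static class PoolContext {
        static int depth;
        static void enter() { depth++; }
        static void exit()  { depth--; }
    }

    public static void main(String[] args) {
        final int N = 1_000_000; // A very long ("dirty") iteration.
        final int CHUNK = 1000;  // Arbitrary chunk size (assumption).
        double sum = 0;
        for (int i = 0; i < N; i += CHUNK) {
            PoolContext.enter(); // Inner pool: footprint stays small.
            try {
                for (int j = i; j < Math.min(i + CHUNK, N); j++) {
                    sum += j;    // Temporary objects would be created here.
                }
            } finally {
                PoolContext.exit(); // Recycles this chunk's temporaries at once.
            }
        }
        System.out.println(sum == (double) N * (N - 1) / 2); // true: all chunks processed.
    }
}
```

Each inner context keeps only one chunk's worth of temporary objects alive, which is what bounds the pool's memory footprint.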
Individual recycling is possible for methods having access to the object pool. This is the case for code declaring {@link javolution.realtime.ObjectFactory factory} instances (usually private) or for {@link javolution.realtime.RealtimeObject RealtimeObject} sub-classes through the protected {@link javolution.realtime.RealtimeObject#recycle recycle} method.[code]
// Uses factory pool access to recycle immediately (pool method names indicative).
ObjectPool pool = FACTORY.currentPool();
Object obj = pool.next();
... // Uses the object.
pool.recycle(obj); // Recycles immediately, no need to wait for the context exit.
[/code]
No, as long as you {@link javolution.realtime.RealtimeObject#export export} or {@link javolution.realtime.RealtimeObject#preserve preserve} the objects which might be referenced outside of the pool context, immutable objects stay immutable! Furthermore, you do not have to worry about thread synchronization as stack objects are thread-local.
In practice, very few methods have to worry about these constraints. They are:
The methods defining a pool context try-finally block. They have to ensure that objects created/modified inside the context scope and accessible outside of it are {@link javolution.realtime.RealtimeObject#export exported} (typically a return value).
The methods creating or modifying static objects. Because static objects can be accessed from any thread, local objects need to be {@link javolution.realtime.RealtimeObject#moveHeap moved to the heap} or, better, {@link javolution.realtime.RealtimeObject#preserve preserved} when made accessible from a static object (when using preserve, one will have to {@link javolution.realtime.RealtimeObject#unpreserve unpreserve} at a later time, typically when the preserved value is replaced).
For additional safety, an IllegalAccessError is raised at run time when the rules above are broken.
In truth, object spaces promote the use of immutable objects (as their allocation cost is significantly reduced), reduce thread interaction (e.g. race conditions) and often lead to safer, faster and more robust applications.
A resounding Yes! The easiest way is to ensure that all your threads run in a pool context, only static constants are exported to the heap and your system state can be updated without allocating new objects. This last condition is easily satisfied by using mutable objects or by preventing local (on the stack) immutable objects from being automatically recycled. The following illustrates this capability:[code]
// This thread recycles its objects itself (very fast).
class Navigator extends RealtimeThread {
    private Coordinates position = Coordinates.valueOf(0, 0);

    public void run() {
        while (true) {
            PoolContext.enter();
            try {
                Coordinates newPosition = calculatePosition().preserve(); // On the stack.
                position.unpreserve(); // Old position to be recycled upon context exit.
                synchronized (this) { // Updates shared position.
                    position = newPosition;
                }
            } finally {
                PoolContext.exit(); // Recycles all stack objects except the new position
            }                       // (very fast, just a stack pointer reset).
        }
    }

    public synchronized Coordinates getPosition() {
        return position.copy(); // On the stack of the calling thread.
    }
}[/code]
Some JDK library classes may create temporary objects on the heap and therefore should be avoided or replaced by "cleaner" classes, e.g. {@link javolution.util.FastMap FastMap} instead of java.util.HashMap, {@link javolution.lang.TextBuilder TextBuilder} instead of java.lang.StringBuffer (whose setLength(0) allocates a new internal array), and {@link javolution.lang.TypeFormat TypeFormat} for the parsing/formatting of primitive types.
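The buffer-reuse pattern behind such "cleaner" classes can be illustrated with plain JDK classes. This is a sketch only, not a substitute for TextBuilder; note that modern StringBuilder.setLength(0) keeps its internal array, unlike the old StringBuffer behavior mentioned above:

```java
// Sketch: reuse one builder across iterations instead of creating a new
// String/StringBuffer each time. Once grown to capacity, the builder's
// internal array is reused; only the final toString() allocates.
public class ReuseSketch {
    public static void main(String[] args) {
        StringBuilder buffer = new StringBuilder(64); // Preallocated once.
        String last = null;
        for (int i = 0; i < 3; i++) {
            buffer.setLength(0);                  // Reset; the array is kept.
            buffer.append("value=").append(i);    // No temporary Strings.
            last = buffer.toString();             // Allocates only the result.
        }
        System.out.println(last); // value=2
    }
}
```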
Classes avoiding dynamic memory allocation are significantly faster. For example, our {@link javolution.xml.sax.XmlSaxParserImpl XmlSaxParserImpl} and {@link javolution.xml.pull.XmlPullParserImpl XmlPullParserImpl} are 3-5x faster than conventional XML parsers. To avoid synchronization issues, it is often easier to allocate new objects. Other techniques, such as the "returnValue" parameter, are particularly ugly and unsafe as they require mutability. Javolution's real-time facility promotes the dynamic creation of immutable objects, as these object creations are fast and have no adverse effect on garbage collection. Basically, with pool contexts the CPU is busy doing the "real thing", not "memory management"!
The cost of allocating on the heap is somewhat proportional to the size of the object being allocated. By avoiding this cost you can drastically increase execution speed; the largest objects benefit the most. For example, adding LargeInteger values in a pool context is at least 8x faster than adding java.math.BigInteger values, and our {@link javolution.lang.Text Text} class can be several orders of magnitude faster than java.lang.String. Not surprising when you know that even "empty" Strings take 40 bytes of memory which have to be initialized and garbage collected!
Recycling objects is always more efficient than just recycling memory (aka GC). Our {@link javolution.util.FastMap FastMap} is a complex object using preallocated linked lists. It is fast but costly to build. Nevertheless, in a pool context it can be used as a throw-away map because the construction cost is then reduced to nothing!
The major issue with real-time applications is to ensure a minimal "worst-case" execution time. Avoiding unwanted pauses (GC or just-in-time compilation) is necessary but not sufficient. Object creation is still time consuming for large objects. By allowing object creation to occur at start-up for reuse later, the "worst-case" execution time can be considerably reduced.
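The start-up preallocation idea can be sketched in plain Java. This is illustrative only; the buffer sizes and counts are arbitrary assumptions:

```java
// Sketch: preallocate "expensive" objects at class-initialization time so the
// time-critical path never allocates (worst-case execution time stays bounded).
public class Preallocation {
    static final double[][] BUFFERS = new double[16][];
    static { // Runs once at start-up, before any time-critical code.
        for (int i = 0; i < BUFFERS.length; i++) {
            BUFFERS[i] = new double[1024];
        }
    }

    static double[] buffer(int i) { // Time-critical path: no allocation at all.
        return BUFFERS[i];
    }

    public static void main(String[] args) {
        double[] b = buffer(3);
        b[0] = 42.0;
        System.out.println(buffer(3)[0]); // 42.0: the same preallocated instance.
    }
}
```

This is the same motivation behind ClassInitializer.initialize(...) in the benchmark above: pushing one-time costs to start-up so they never appear in measured (or worst-case) execution time.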
Sure, here is a code excerpt from a developer working on genetic algorithms:[code]
public static void main(String[] args) {
FastTable