The second half has been as mixed as the first. This morning's keynote was an endless stream of three-letter acronyms and buzzwords from a top Oracle executive, Thomas Kurian. He outlined how Oracle sees the future of computing, which was quite painful to sit through. At least until an interesting announcement was made: Oracle is buying Tangosol, makers of Coherence. Cameron outshone the Oracle guys with his demo, which, while not very substantial, actually worked. The Demo Devil apparently spares no one, not even senior Oracle engineers.
But the absolute highlight of the entire conference was the "Java performance myths - how do JVMs really work" session with Brian Goetz. He crammed in probably 150% more words than the average speaker (he talks fast), and it was extremely valuable information. He went through a number of long-lived Java performance myths (object allocation is slow, garbage collection is slow, synchronization is slow, etc.) and explained why they are no longer true. His advice can be summarized in a single sentence:
The JVM is always smarter than you.
Java isn't an interpreted language; it's dynamically compiled, which enables the JVM (or rather the JIT compiler) to gather statistical data at runtime on how code is actually being executed and use it to optimize that code. The JVM is also tuned to recognize common patterns of execution and optimize them as best it can. This means that the more well-designed, clean code you write, the bigger the chance that the JIT compiler will recognize those patterns and do a good job optimizing your code. Since compilation into native code happens at runtime, and can even vary over time (code can be re-analyzed and re-compiled dynamically), it's very difficult to predict how the source code you write will actually be compiled and executed. A side effect of this is that writing isolated benchmarks is often useless and/or misleading, since they don't properly reflect real-world usage. Just write the cleanest, most readable and maintainable code you can, and safely assume that the JVM will do a better job than you at optimization.
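To make the pitfall concrete, here is a sketch of the kind of naive microbenchmark that goes wrong (the class name and loop bound are mine, for illustration only, not from the session). The JIT is free to throw the whole loop away since its result is never used, and the timing also spans both interpreted and compiled execution:

```java
public class NaiveBenchmark {
    public static void main(String[] args) {
        long start = System.nanoTime();
        for (int i = 0; i < 100000000; i++) {
            Math.sqrt(i); // result never used: the JIT may treat this as
                          // dead code and eliminate the loop entirely
        }
        long elapsed = System.nanoTime() - start;
        // The first iterations run interpreted, then the method is compiled
        // mid-run, so this number mixes two very different execution modes.
        System.out.println("elapsed: " + elapsed / 1000000 + " ms");
    }
}
```

Whatever number this prints, it tells you very little about how the same code would behave inside a real application.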
One interesting aspect of synchronization improvements in recent JVMs (Sun Java 6, I believe) is that there is no longer any performance difference between using StringBuffer (synchronized) and StringBuilder (not synchronized), since the JVM automatically detects that the synchronization isn't needed (so-called lock elision) and the executed code ends up equivalent. I can actually back this up with test data I gathered around the time I wrote about how the Java bytecode compiler automatically converts String concatenation using the + operator to StringBuffer/StringBuilder (depending on which version of javac is used). I found no speed difference whatsoever between the two while profiling a webapp during a load test with several hundred simultaneous threads, which confused me at the time, but now I have an explanation. So the old truth "Concatenating Strings is slow, use StringBuffer" and the not-so-old truth "StringBuffer is slow, use StringBuilder" are now both false*, and can instead be reduced to "it doesn't matter at all". Since performance is equal, readability wins - use regular concatenation (+).
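For reference, this is roughly the rewriting javac performs on + concatenation (a sketch; the class and method names are mine):

```java
public class ConcatDemo {

    // Source as written:
    static String greet(String name) {
        return "Hello, " + name + "!";
    }

    // Approximately what javac generates for greet() since Java 5;
    // older compilers used StringBuffer instead of StringBuilder:
    static String greetCompiled(String name) {
        return new StringBuilder()
                .append("Hello, ")
                .append(name)
                .append("!")
                .toString();
    }
}
```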
Of course, in most cases the application bottlenecks aren't in your code at all, but in I/O operations against databases and other external systems.
I wrote a small benchmark (even though I just said that benchmarks are useless) that you can check out and play around with.
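If you just want the gist, a minimal sketch along the same lines could look like this (all names and numbers are mine, not the actual benchmark code):

```java
public class StringConcatBenchmark {
    private static final int ITERATIONS = 100000;

    public static void main(String[] args) {
        // Run several rounds so the JIT has compiled the hot paths;
        // only the later rounds say anything meaningful.
        for (int round = 0; round < 5; round++) {
            long t1 = System.nanoTime();
            int lenBuffer = withBuffer();
            long t2 = System.nanoTime();
            int lenBuilder = withBuilder();
            long t3 = System.nanoTime();
            System.out.println("round " + round
                    + "  StringBuffer: " + (t2 - t1) / 1000 + " us"
                    + "  StringBuilder: " + (t3 - t2) / 1000 + " us"
                    + "  (lengths " + lenBuffer + "/" + lenBuilder + ")");
        }
    }

    static int withBuffer() {
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < ITERATIONS; i++) {
            sb.append('x');
        }
        return sb.length(); // return the result so the loop isn't dead code
    }

    static int withBuilder() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < ITERATIONS; i++) {
            sb.append('x');
        }
        return sb.length(); // return the result so the loop isn't dead code
    }
}
```

Note that each method returns its result rather than discarding it, so the JIT can't eliminate the loops as dead code - exactly the trap described above.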
* if the JVM comes to the conclusion that StringBuffer synchronization isn't needed, which is often the case.