Cliff Click
Über Conf
Denver · June 14 - 17, 2010

CTO & Co-Founder of 0xdata
Cliff Click is the CTO and Co-Founder of 0xdata, a firm dedicated to creating a new way to think about web-scale data storage and real-time analytics. Cliff wrote his first compiler when he was 15 (Pascal to TRS Z-80!), although his most famous compiler is the HotSpot Server Compiler (the Sea of Nodes IR). He helped Azul Systems build an 864-core pure-Java mainframe that keeps GC pauses on 500GB heaps to under 10ms, and worked on all aspects of that JVM. Before that Cliff worked on HotSpot at Sun Microsystems, and is at least partially responsible for bringing Java into the mainstream.
Cliff is regularly invited to speak at industry and academic conferences and has published many papers about HotSpot technology. He holds a PhD in Computer Science from Rice University and about 15 patents.
Presentations
The Art of (Java) Benchmarking
People write toy Java benchmarks all the time. Nearly always they “get it wrong” – wrong in the sense that the code they write doesn't measure what they think it does. Oh, it measures something all right – just not what they want. This session presents some common benchmarking pitfalls, demonstrating pieces of real, bad (and usually really bad) benchmarks. The session is for any programmer who has tried to benchmark anything. It provides specific advice on how to benchmark, stumbling blocks to look out for, and real-world examples of how well-known benchmarks fail to actually measure what they intended to measure.
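To make one such pitfall concrete, here is a minimal, hypothetical sketch (not taken from the session materials) of a toy benchmark that measures something other than what its author intended: the computed value is never used, so the JIT may remove the work, and there is no warm-up, so the timing mixes interpreted and compiled execution.

// Hypothetical example: a classic "toy benchmark" mistake.
// 'sum' is never used afterwards, so after JIT warm-up HotSpot is free to
// treat the whole loop as dead code -- the timer then measures (almost)
// nothing, not the cost of Math.sqrt.
public class BadBenchmark {
    public static void main(String[] args) {
        long start = System.nanoTime();
        double sum = 0;
        for (int i = 0; i < 100_000_000; i++) {
            sum += Math.sqrt(i);          // result accumulated...
        }
        long elapsed = System.nanoTime() - start;
        // ...but never printed or returned. Also note: no warm-up runs, so
        // the measurement spans interpretation, compilation and compiled code.
        System.out.println("elapsed ns: " + elapsed);
    }
}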
A Crash Course in Modern Hardware
This session walks through a tiny performance example on a modern out-of-order CPU and shows that (1) single-threaded performance is tapped out, (2) all the action is with multi-threaded programs, and (3) the memory subsystem dominates what performance you get.
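As a hypothetical illustration of the memory-subsystem point (not the example used in the talk), the two loops below do identical arithmetic, but the column-order walk misses cache on nearly every access while the row-order walk streams through memory:

// Same work, very different memory behavior: row-order traversal streams
// through the array, column-order traversal strides across it and defeats
// the caches. No single-threaded cleverness in the loop body hides this.
public class CacheWalk {
    static final int N = 4096;
    static final int[][] m = new int[N][N];

    static long rowOrder() { long s = 0; for (int i = 0; i < N; i++) for (int j = 0; j < N; j++) s += m[i][j]; return s; }
    static long colOrder() { long s = 0; for (int j = 0; j < N; j++) for (int i = 0; i < N; i++) s += m[i][j]; return s; }

    public static void main(String[] args) {
        for (int warm = 0; warm < 3; warm++) { rowOrder(); colOrder(); }  // crude warm-up
        long t0 = System.nanoTime(); long a = rowOrder();
        long t1 = System.nanoTime(); long b = colOrder();
        long t2 = System.nanoTime();
        System.out.println("row-order ms: " + (t1 - t0) / 1_000_000
                         + "  col-order ms: " + (t2 - t1) / 1_000_000 + "  (" + (a + b) + ")");
    }
}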
Fast Bytecodes for Funny Languages
There are several languages that target bytecodes and the JVM as their new “assembler,” including Scala, Clojure, Jython, JRuby, JavaScript (via Rhino), and JPC. This session takes a quick look at how well these languages sit on a JVM, what their performance is, where it goes, and why.
Challenges and Directions in Java Virtual Machines
Available core counts are going up, up, up! Intel is shipping quad-core chips; Sun’s Rock has (effectively) 64 CPUs and Azul’s hardware nearly a thousand cores. How do we use all those cores effectively? The JVM proper can directly make use of a small number of cores (JIT compilation, profiling), and garbage collection can use about 20 percent more cores than the application is using to make garbage, but this hardly gets us to four cores. Application servers and transactional (J2EE/bean) applications scale well with thread pools to about 40 or 60 CPUs, and then internal locking starts to limit scaling. Unless your application has embarrassingly parallel data (e.g., data mining, risk analysis, or, heaven forbid, Fortran-style weather prediction), how can you use more CPUs to get more performance? How do you debug the million-line concurrent program?
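As a hypothetical sketch of the "internal locking limits scaling" point (not part of the session), the program below runs one worker per available CPU: when every worker funnels through a single shared lock the extra cores buy little, while a striped LongAdder removes that serialization point.

// All threads contend on one lock vs. a striped java.util.concurrent LongAdder.
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

public class LockScaling {
    static long locked;                                // guarded by LockScaling.class
    static final LongAdder striped = new LongAdder();

    static long run(int threads, Runnable work) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++)
            pool.submit(() -> { for (int i = 0; i < 1_000_000; i++) work.run(); });
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return (System.nanoTime() - start) / 1_000_000;  // elapsed milliseconds
    }

    public static void main(String[] args) throws Exception {
        int cpus = Runtime.getRuntime().availableProcessors();
        long a = run(cpus, () -> { synchronized (LockScaling.class) { locked++; } });
        long b = run(cpus, striped::increment);
        System.out.println("one shared lock: " + a + " ms, striped adder: " + b + " ms");
    }
}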