Execution Model of the Android™ Architecture
Android is an open-source operating system developed mainly by Google together with other companies from the Open Handset Alliance such as Sony, Samsung, LG and Nvidia. It was first released on the HTC Dream in 2008 and has since become the dominant mobile operating system (2). Java is the platform's primary programming language, in which both the application framework and the applications themselves are written. The bytecode is executed in a virtual machine that differs from the standard Sun JVM and that has evolved from a heavily optimised but basic version (3: p. 22) into a runtime that today includes several performance-increasing technologies such as JIT and AOT compilation. The entire platform architecture is built around memory and execution-speed constraints as well as the primary goal of energy efficiency, which increases the time between charges.
Android platform architecture
Android is based on a modified version of the Linux kernel with custom implementations of most libraries, which increase efficiency on devices with little memory and a slow processor (p. 18). It can be described as a software stack built up of several layers of abstraction; the hardware abstraction layer, for example, hides the individual camera driver in the Linux kernel from the applications. At the top are the applications, which communicate with the system managers, other applications or libraries through the Java API (4). Since the release of the Android Native Development Kit, performance-critical code can also be written in C and C++, and native libraries in those languages can be included in an application. At the bottom of the stack sits the power management system, which tries to reduce consumption through optimised resource and thread management, power states and automatic low-power modes for applications.
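As a small illustration of these layers, the following sketch (assuming the camera2 Java API; the class name is chosen only for this example) shows that an application merely calls into the framework, while the camera service, the hardware abstraction layer and the vendor driver underneath remain invisible to it:

```java
import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraManager;

// Minimal sketch: the application only talks to the framework's Java API.
// The CameraManager forwards the request through the system's camera service
// and the hardware abstraction layer down to the vendor's driver, none of
// which are visible to the application code.
public final class CameraLister {

    /** Returns the identifiers of the cameras exposed by the framework. */
    public static String[] listCameras(Context context) {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        try {
            return manager.getCameraIdList();
        } catch (CameraAccessException e) {
            // Failures in the lower layers surface through the same Java API.
            return new String[0];
        }
    }
}
```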
Types of compilation
Java is a language that runs in a virtual machine to increase the portability of code, which means that the initial compilation only transforms the source code into an intermediate bytecode format. As this format has to be translated into platform-specific machine code instructions at runtime, virtual machines are in general slower and consume more energy than compiled languages. This execution model can be improved by introducing compilation at different points:
Just-In-Time compilation tries to identify the most used code paths at runtime and compile them to native code. While this method adds overhead in processing time and memory to track the control flow, it still increases performance. However, on Android much of this performance-critical code is already executed in compiled native libraries, which reduces the speedup that can be achieved in applications (3: p. 22).
Ahead-Of-Time compilation is closer to traditional compiled languages and combines many of the motivations behind both execution models. The bytecode is compiled into native, platform-specific machine code before its execution, ideally not at runtime. Android incorporates this method with its new runtime and performs most of the compilation at installation time. Therefore, bytecode written for Dalvik remains portable to all Android systems, while the hardware manufacturers can specifically optimise their version of the compiler.
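A minimal sketch in Java (names and constants are illustrative only) shows the kind of code the two approaches treat differently: a JIT compiler translates only the loop once it has been observed to be hot, whereas an AOT compiler translates both methods before the first run, for example at installation time.

```java
// Illustrative sketch of code whose handling differs between the execution
// models described above.
public final class Checksum {

    // A rarely called helper: an interpreter or AOT-compiled code handles it,
    // but a JIT compiler would likely never consider it "hot".
    static int seed() {
        return 17;
    }

    // A tight loop that dominates the running time: a JIT compiler detects the
    // hot path at runtime and translates it to native code, whereas an AOT
    // compiler has already translated it (together with everything else)
    // before the first execution.
    static int checksum(int[] data) {
        int sum = seed();
        for (int value : data) {
            sum = 31 * sum + value;
        }
        return sum;
    }
}
```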
Android's Dalvik
Dalvik was the name of Android's first virtual machine; it was discontinued with Android 5.0 Lollipop in 2014. Still, Dalvik illustrates the specific challenges faced by developers of mobile platforms. The Dalvik Virtual Machine runs a special bytecode which is optimised to be more space efficient through a smaller instruction set with 16 bits per instruction and a more compressed file structure (p. 19). Initially, Dalvik was only an interpreter of the bytecode without any JIT compilation. It is implemented such that multiple separate instances of it can run with a minimum system RAM of 64 MB, of which only 20 MB remain for programs once the operating system and high-level services have been loaded and started. Unlike the desktop Sun JVM, Dalvik does not use a memory-expensive stack data structure to represent its current state. Instead, it is register based, which makes it less flexible but more memory and energy efficient (p. 19).
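The difference between the two models can be sketched for a trivial addition; the bytecode listings in the comments are simplified and only meant to illustrate how each machine represents its state:

```java
// Minimal sketch of the stack-based versus register-based execution state.
public final class AddExample {

    static int add(int a, int b) {
        // Stack-based JVM bytecode: operands are pushed onto an operand stack
        // and the result replaces them, roughly:
        //   iload_0   // push a
        //   iload_1   // push b
        //   iadd      // pop both, push a + b
        //   ireturn   // pop and return the result
        //
        // Register-based Dalvik bytecode: operands already live in registers
        // and the result is written directly to one, roughly:
        //   add-int v0, p0, p1
        //   return v0
        return a + b;
    }
}
```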
JIT compilation was introduced to the DVM with Android version 2.2 and again is specifically designed to run in multiple virtual machine instances at the same time with very low memory usage. While the Sun JVM uses method-based JIT, Dalvik's is trace-based. Method-based Just-In-Time compilation tracks the usage of whole methods and compiles them at this coarse level as well. As this granularity is very coarse, it requires a larger amount of memory and compilation time while not always focusing on the most executed code. Trace-based JIT only optimises at the control-path level, which makes the compilation more efficient, as only relevant "hot" code is translated. Furthermore, the trace length is limited to 100 operations. This method, however, also makes reusing compiled code in other threads difficult. Moreover, if an exception is thrown, the interpreter needs to take over after a rollback to the previous state, which can result in a large overhead. Still, within those constraints, JIT compilation was possible on low-memory devices, and the trace length could, for example, be extended on phones with more available memory (p. 23).
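A short, purely illustrative sketch of the kind of loop a trace-based JIT targets, together with an exceptional side exit that forces the interpreter to take over:

```java
// Illustrative sketch only: the common loop body forms a short, hot trace that
// a trace-based JIT can translate to native code, while the rare exceptional
// path abandons the trace.
public final class DigitSum {

    static int sumOfDigits(String digits) {
        int sum = 0;
        for (int i = 0; i < digits.length(); i++) {
            char c = digits.charAt(i);
            if (c < '0' || c > '9') {
                // Rare side exit: throwing here leaves the compiled trace, and
                // execution continues in the interpreter after a rollback.
                throw new IllegalArgumentException("not a digit: " + c);
            }
            // Hot path: these few operations fit well below the 100-operation
            // trace limit and are executed on almost every iteration.
            sum += c - '0';
        }
        return sum;
    }
}
```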
Optimisations to Android
The Android Runtime (5) was first introduced in Android 4.4 KitKat and entirely replaced the Dalvik VM in version 5.0 Lollipop. Its most important improvement was the integration of AOT compilation. To ensure backward compatibility, programs still use the same bytecode format as the DVM. At install time, this format is compiled into a new binary format. When the application runs, the system checks whether all code exists in the compiled format. The machine code that is finally executed can come from there, or it can be substituted by interpreted cold code or JIT-compiled hot code derived from the original bytecode. The new Just-In-Time compiler is also used to continuously improve performance by analysing the control flow and using dynamic runtime information.
Similar to the implementation of the system drivers and native libraries like the OpenGL rendering API, programmers can use C and C++ to develop code that is always compiled and provides lower-level functionality. Apart from the performance of critical sections like 3D rendering, a further benefit is the reusability of common external C libraries. Another advantage is manual memory management instead of Java's garbage collection. Especially in earlier versions of Android, using C achieved speedups of three to four times (7). However, with the AOT compilation in the Android Runtime and the increased use of 64-bit hardware, the gap has decreased. There are also use cases in which Java performs slightly better because of small optimisations. In general, native code remains essential for high-performance, energy-efficient low-level tasks, while Java is still used as the primary language for applications.
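The Java side of such an integration is small; the following sketch assumes a hypothetical native library called "imagefilters" whose C/C++ implementation is compiled for the device's architecture:

```java
// Minimal sketch of the Java side of an NDK integration; the library name and
// the native implementation are assumptions made for this example.
public final class ImageFilters {

    static {
        // Loads libimagefilters.so, which contains the C/C++ implementation
        // compiled ahead of time for the device's architecture.
        System.loadLibrary("imagefilters");
    }

    // Declared in Java, implemented in C/C++ via the JNI naming convention
    // (Java_ImageFilters_blur) or RegisterNatives.
    public static native void blur(int[] pixels, int width, int height);
}
```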
One major disadvantage comes with the interoperability between Java and native code. As most APIs, such as networking, are implemented natively, most applications make use of this functionality. A 2014 study by four researchers from the University of Southern California (8) showed that about 85% of the energy is consumed by API calls, while on average only 2% comes from the developers' own bytecode. Of those calls, the most expensive were networking, the camera and SQLite database access (p. 125). This problem stems not only from the inherently high consumption of, for instance, Internet access, which can be improved in various ways; in addition, when the Java code calls native functions, the necessary libraries often need to be loaded at runtime. It is, therefore, important both to reduce the number of API calls and to decrease the amount of energy needed for those calls.
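One practical way to reduce the number of boundary crossings is to batch work before handing it to the native side; the sketch below is illustrative only, and both native methods are assumed to exist in a hypothetical library:

```java
// Illustrative sketch of reducing the number of expensive Java-to-native
// crossings by batching; the library and both native methods are hypothetical.
public final class SensorUploader {

    static {
        // Hypothetical native library containing both implementations.
        System.loadLibrary("uploader");
    }

    // One native call per sample: every invocation pays the Java-to-native
    // transition cost again.
    public static native void uploadSample(int sample);

    // One native call for the whole batch: the interoperability cost is paid
    // once, and the native side can also bundle the resulting network traffic.
    public static native void uploadBatch(int[] samples);

    public static void upload(int[] samples) {
        // Preferred: a single crossing instead of samples.length crossings.
        uploadBatch(samples);
    }
}
```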
Conclusion
Android as an operating system reflects the high priority given to both ease of development and low energy consumption. Even though Java, as a language run in a virtual machine, consumes more energy and memory than compiled languages like C, the continuous improvement of the Android-specific VM to include both AOT and JIT compilation with a low memory overhead has brought the two goals closer together. While the ability to write native code is an opportunity for more energy-efficient code, the expense of the interoperability still has a significant impact. To develop applications with a smaller footprint, it is therefore essential to reduce the number of API calls and to work on improving the compilers for both Java and C.