1. Before you begin
A stylus is a pen-shaped tool that helps users perform precise tasks. In this codelab, you learn how to implement an organic stylus experience with the android.os and androidx libraries. You also learn how to use the MotionEvent class to support pressure, tilt, and orientation, and how to use palm rejection to prevent unwanted touches. In addition, you learn how to reduce stylus latency with motion prediction and how to render low-latency graphics with OpenGL and the SurfaceView class.
Prerequisites
- Familiarity with Kotlin and lambda expressions.
- Basic knowledge of how to use Android Studio.
- Basic knowledge of Jetpack Compose.
- Basic knowledge of OpenGL for the implementation of low-latency graphics.
What you'll learn
- How to use the MotionEvent class for stylus support.
- How to implement stylus capabilities, including support for pressure, tilt, and orientation.
- How to draw on a Canvas class.
- How to implement motion prediction.
- How to render low-latency graphics with OpenGL and the SurfaceView class.
What you'll need
- The latest version of Android Studio.
- Familiarity with Kotlin syntax, including lambda expressions.
- Basic experience with Compose. If you're not familiar with Compose, complete the Jetpack Compose basics codelab.
- A device with stylus support.
- A working stylus.
- Git.
2. Get the starter code
To get the code that includes the theme and basic setup for the starter app, follow these steps:
- Clone this GitHub repository:
git clone https://github.com/android/large-screen-codelabs
- Open the advanced-stylus folder. The start folder contains the starter code and the end folder contains the solution code.
3. Implement a basic drawing app
First, you build the layout needed for a basic drawing app that lets users draw and that displays the stylus attributes on screen with the Canvas composable function. It looks like the following image:
The upper portion is a Canvas composable function in which you draw the stylus visualization and display the stylus's different attributes, such as orientation, tilt, and pressure. The lower portion is another Canvas composable function, which receives stylus input and draws simple strokes.
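For orientation, here is a minimal sketch of how such a two-part layout can be assembled inside setContent in the onCreate function. It reuses the starter code's StylusVisualization and DrawArea composables and is not the exact starter implementation:
// Sketch only: the visualization canvas on top, a divider, then the drawing area.
Column {
    StylusVisualization(
        modifier = Modifier
            .fillMaxWidth()
            .height(100.dp)
    )
    Divider(
        thickness = 1.dp,
        color = Color.Black,
    )
    DrawArea(modifier = Modifier.fillMaxSize())
}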
To implement the basic layout of the drawing app, follow these steps:
- In Android Studio, open the cloned repository.
- Click **app** > **java** > **com.example.stylus**, and then double-click **MainActivity**. The MainActivity.kt file opens.
- In the MainActivity class, notice the StylusVisualization and DrawArea composable functions. In this section, you focus on the DrawArea composable function.
Create the StylusState class
- In the same ui directory, click **File > New > Kotlin Class/File**.
- In the text box, replace the **Name** placeholder with StylusState.kt, and then press Enter (or return on macOS).
- In the StylusState.kt file, create the StylusState data class, and then add the variables from the following table:
| Variable | Type | Default value | Description |
|---|---|---|---|
| pressure | Float | 0F | A value that ranges from 0 to 1.0. |
| orientation | Float | 0F | A value in radians that ranges from -pi to pi. |
| tilt | Float | 0F | A value in radians that ranges from 0 to pi/2. |
| path | Path | Path() | Uses the Path class to render the user's strokes on the screen. |
StylusState.kt
package com.example.stylus.ui
import androidx.compose.ui.graphics.Path
data class StylusState(
var pressure: Float = 0F,
var orientation: Float = 0F,
var tilt: Float = 0F,
var path: Path = Path(),
)
- In the MainActivity.kt file, find the MainActivity class, and then add the stylus state with the mutableStateOf() function:
MainActivity.kt
import androidx.compose.runtime.setValue
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import com.example.stylus.ui.StylusState
class MainActivity : ComponentActivity() {
private var stylusState: StylusState by mutableStateOf(StylusState())
DrawPoint class
The DrawPoint class stores data about each point drawn on the screen; when you connect these points, they create lines. It mimics the way a Path object works.
The DrawPoint class extends the PointF class. It contains the following data:
| Parameter | Type | Description |
|---|---|---|
| x | Float | The x coordinate |
| y | Float | The y coordinate |
| type | DrawPointType | The type of the point |
There are two types of DrawPoint objects, described by the DrawPointType enum:

| Type | Description |
|---|---|
| START | Moves the start of the line to a position. |
| LINE | Draws a line from the previous point. |
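The DrawPointType enum lives in the com.example.stylus.data package (as the imports later in this section show); conceptually it is just the two values above, along the lines of this sketch:
// Sketch of the DrawPointType enum described in the table above.
enum class DrawPointType {
    START, // marks the beginning of a new line
    LINE   // continues the line from the previous point
}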
DrawPoint.kt
import android.graphics.PointF
class DrawPoint(x: Float, y: Float, val type: DrawPointType): PointF(x, y)
Render the data points into a path
For this app, the StylusViewModel class holds the line data, prepares the data for rendering, and performs some operations on the Path object for palm rejection.
- To hold the line data, in the StylusViewModel class, create a mutable list of DrawPoint objects:
StylusViewModel.kt
import androidx.lifecycle.ViewModel
import com.example.stylus.data.DrawPoint
class StylusViewModel : ViewModel() {
private var currentPath = mutableListOf<DrawPoint>()
To render the data points into a path, follow these steps:
- In the StylusViewModel class of the StylusViewModel.kt file, add a createPath function.
- Create a path variable of type Path with the Path() constructor.
- Create a for loop that iterates through each data point in the currentPath variable.
- If the data point is of type START, call the moveTo method to start a line at the specified x and y coordinates.
- Otherwise, call the lineTo method with the data point's x and y coordinates to connect it to the previous point.
- Return the path object.
StylusViewModel.kt
import androidx.compose.ui.graphics.Path
import com.example.stylus.data.DrawPoint
import com.example.stylus.data.DrawPointType
class StylusViewModel : ViewModel() {
private var currentPath = mutableListOf<DrawPoint>()
private fun createPath(): Path {
val path = Path()
for (point in currentPath) {
if (point.type == DrawPointType.START) {
path.moveTo(point.x, point.y)
} else {
path.lineTo(point.x, point.y)
}
}
return path
}
private fun cancelLastStroke() {
}
Handle MotionEvent objects
Stylus events come through MotionEvent objects, which provide information about the action performed and the data associated with it, such as the pointer's position and pressure. The following table contains some MotionEvent constants and the data they carry, which you can use to identify what the user does on the screen:
| Constant | Data |
|---|---|
| ACTION_DOWN | The pointer touches the screen. This is the start of the line. |
| ACTION_MOVE | The pointer moves on the screen. This is the line being drawn. |
| ACTION_UP | The pointer stops touching the screen. This is the end of the line. |
| ACTION_CANCEL | An unwanted touch is detected. Cancel the last stroke. |
When the app receives a new MotionEvent object, the screen should render to reflect the new user input.
- To handle MotionEvent objects in the StylusViewModel class, create a function that collects the line's coordinates:
StylusViewModel.kt
import android.view.MotionEvent
class StylusViewModel : ViewModel() {
private var currentPath = mutableListOf<DrawPoint>()
...
fun processMotionEvent(motionEvent: MotionEvent): Boolean {
when (motionEvent.actionMasked) {
MotionEvent.ACTION_DOWN -> {
currentPath.add(
DrawPoint(motionEvent.x, motionEvent.y, DrawPointType.START)
)
}
MotionEvent.ACTION_MOVE -> {
currentPath.add(DrawPoint(motionEvent.x, motionEvent.y, DrawPointType.LINE))
}
MotionEvent.ACTION_UP -> {
currentPath.add(DrawPoint(motionEvent.x, motionEvent.y, DrawPointType.LINE))
}
MotionEvent.ACTION_CANCEL -> {
// Unwanted touch detected.
cancelLastStroke()
}
else -> return false
}
return true
}
Send the data to the UI
To update the StylusViewModel class so that the UI can collect changes to the StylusState data class, follow these steps:
- In the StylusViewModel class, create a _stylusState variable of type MutableStateFlow that holds the StylusState class, and a stylusState variable of type StateFlow that holds the StylusState class. The _stylusState variable is modified each time the stylus state changes in the StylusViewModel class, and the stylusState variable is consumed by the UI in the MainActivity class.
StylusViewModel.kt
import com.example.stylus.ui.StylusState
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.StateFlow
class StylusViewModel : ViewModel() {
private var _stylusState = MutableStateFlow(StylusState())
val stylusState: StateFlow<StylusState> = _stylusState
- Create a requestRendering function that accepts a StylusState object as a parameter:
StylusViewModel.kt
import kotlinx.coroutines.flow.update
...
class StylusViewModel : ViewModel() {
private var _stylusState = MutableStateFlow(StylusState())
val stylusState: StateFlow<StylusState> = _stylusState
...
private fun requestRendering(stylusState: StylusState) {
// Updates the stylusState, which triggers a flow.
_stylusState.update {
return@update stylusState
}
}
- At the end of the processMotionEvent function, add a call to the requestRendering function with a StylusState parameter.
- In the StylusState parameter, retrieve the tilt, pressure, and orientation values from the motionEvent variable, and then create the path with the createPath() function. This triggers a flow event, which you connect in the UI.
StylusViewModel.kt
...
class StylusViewModel : ViewModel() {
...
fun processMotionEvent(motionEvent: MotionEvent): Boolean {
...
else -> return false
}
requestRendering(
StylusState(
tilt = motionEvent.getAxisValue(MotionEvent.AXIS_TILT),
pressure = motionEvent.pressure,
orientation = motionEvent.orientation,
path = createPath()
)
)
Connect the UI with the StylusViewModel class
- In the MainActivity class, find the super.onCreate function call in the onCreate function, and then add the state collection. To learn more about state collection, see Collect flows in a lifecycle-aware manner.
MainActivity.kt
import androidx.lifecycle.lifecycleScope
import kotlinx.coroutines.launch
import androidx.lifecycle.repeatOnLifecycle
import kotlinx.coroutines.flow.onEach
import androidx.lifecycle.Lifecycle
import kotlinx.coroutines.flow.collect
...
class MainActivity : ComponentActivity() {
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
lifecycleScope.launch {
lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
viewModel.stylusState
.onEach {
stylusState = it
}
.collect()
}
}
Now, each time the StylusViewModel class publishes a new StylusState state, the activity receives it and the new StylusState object updates the local stylusState variable of the MainActivity class.
- In the body of the DrawArea composable function, add the pointerInteropFilter modifier to the Canvas composable function to provide MotionEvent objects.
- Send the MotionEvent objects to the StylusViewModel class's processMotionEvent function for processing:
MainActivity.kt
import androidx.compose.ui.ExperimentalComposeUiApi
import androidx.compose.ui.input.pointer.pointerInteropFilter
...
class MainActivity : ComponentActivity() {
...
@Composable
@OptIn(ExperimentalComposeUiApi::class)
fun DrawArea(modifier: Modifier = Modifier) {
Canvas(modifier = modifier
.clipToBounds()
.pointerInteropFilter {
viewModel.processMotionEvent(it)
}
) {
}
}
- Call the drawPath function with the stylusState object's path property, and then provide the color and stroke style:
MainActivity.kt
class MainActivity : ComponentActivity() {
...
@Composable
@OptIn(ExperimentalComposeUiApi::class)
fun DrawArea(modifier: Modifier = Modifier) {
Canvas(modifier = modifier
.clipToBounds()
.pointerInteropFilter {
viewModel.processMotionEvent(it)
}
) {
with(stylusState) {
drawPath(
path = this.path,
color = Color.Gray,
style = strokeStyle
)
}
}
}
- Run the app, and then notice that you can draw on the screen.
4. Implement pressure, orientation, and tilt support
In the previous section, you learned how to retrieve stylus information from the MotionEvent object, such as pressure, orientation, and tilt:
StylusViewModel.kt
tilt = motionEvent.getAxisValue(MotionEvent.AXIS_TILT),
pressure = motionEvent.pressure,
orientation = motionEvent.orientation,
However, this shortcut works only for the first pointer. When multi-touch occurs, multiple pointers are detected, and this shortcut returns only the values of the first pointer (the first pointer on the screen). To request data about a specific pointer, you can use the pointerIndex parameter:
StylusViewModel.kt
tilt = motionEvent.getAxisValue(MotionEvent.AXIS_TILT, pointerIndex),
pressure = motionEvent.getPressure(pointerIndex),
orientation = motionEvent.getOrientation(pointerIndex)
To learn more about pointers and multi-touch, see Handle multi-touch gestures.
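As an illustration only (this helper is not part of the codelab's code), a pointer index can be obtained by iterating over all active pointers in the event:
import android.util.Log
import android.view.MotionEvent

// Hypothetical helper: read per-pointer stylus values by pointer index.
fun logAllPointers(motionEvent: MotionEvent) {
    for (pointerIndex in 0 until motionEvent.pointerCount) {
        val tilt = motionEvent.getAxisValue(MotionEvent.AXIS_TILT, pointerIndex)
        val pressure = motionEvent.getPressure(pointerIndex)
        val orientation = motionEvent.getOrientation(pointerIndex)
        // Unlike the index, the pointer id stays stable for as long as the pointer is down.
        val pointerId = motionEvent.getPointerId(pointerIndex)
        Log.d("Stylus", "pointer $pointerId: tilt=$tilt pressure=$pressure orientation=$orientation")
    }
}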
Add visualizations for pressure, orientation, and tilt
- In the MainActivity.kt file, find the StylusVisualization composable function, and then use the information from the StylusState flow object to render the visualizations:
MainActivity.kt
import StylusVisualization.drawOrientation
import StylusVisualization.drawPressure
import StylusVisualization.drawTilt
...
class MainActivity : ComponentActivity() {
...
@Composable
fun StylusVisualization(modifier: Modifier = Modifier) {
Canvas(
modifier = modifier
) {
with(stylusState) {
drawOrientation(this.orientation)
drawTilt(this.tilt)
drawPressure(this.pressure)
}
}
}
- Run the app. You see three indicators at the top of the screen that show orientation, pressure, and tilt.
- Scribble on the screen with your stylus, and then observe how each visualization reacts to your input.
- Examine the StylusVisualization.kt file to understand how each visualization is constructed.
5. Implement palm rejection
The screen might register unwanted touches. For example, this happens when a user naturally rests their hand on the screen for support while handwriting.
Palm rejection is a mechanism that detects this behavior and notifies the developer so that the last set of MotionEvent objects can be canceled. A set of MotionEvent objects begins with the ACTION_DOWN constant.
This means that you must maintain a history of the input so that you can remove the unwanted touches from the screen and rerender the legitimate user input. Fortunately, you already store the history in the currentPath variable of the StylusViewModel class.
Android provides the ACTION_CANCEL constant of the MotionEvent object to notify the developer about unwanted touches. Starting with Android 13, the MotionEvent object provides the FLAG_CANCELED constant, which should be checked on the ACTION_POINTER_UP constant.
Implement the cancelLastStroke function
- To remove the data points from the last START data point onward, return to the StylusViewModel class, and then create a cancelLastStroke function that finds the index of the last START data point and keeps only the data from the first data point up to that index minus one:
StylusViewModel.kt
...
class StylusViewModel : ViewModel() {
...
private fun cancelLastStroke() {
// Find the last START event.
val lastIndex = currentPath.findLastIndex {
it.type == DrawPointType.START
}
// If found, keep the element from 0 until the very last event before the last MOVE event.
if (lastIndex > 0) {
currentPath = currentPath.subList(0, lastIndex - 1)
}
}
Add the ACTION_CANCEL and FLAG_CANCELED constants
- In the StylusViewModel.kt file, find the processMotionEvent function.
- In the ACTION_UP constant, create a canceled variable that checks whether the current SDK version is Android 13 or higher and whether the FLAG_CANCELED constant is enabled.
- On the next line, create a condition that checks whether the canceled variable is true. If it is, call the cancelLastStroke function to remove the last set of MotionEvent objects. If it isn't, call the currentPath.add method to add the last set of MotionEvent objects.
StylusViewModel.kt
import android.os.Build
...
class StylusViewModel : ViewModel() {
...
fun processMotionEvent(motionEvent: MotionEvent): Boolean {
...
MotionEvent.ACTION_POINTER_UP,
MotionEvent.ACTION_UP -> {
val canceled = Build.VERSION.SDK_INT >= Build.VERSION_CODES.TIRAMISU &&
(motionEvent.flags and MotionEvent.FLAG_CANCELED) == MotionEvent.FLAG_CANCELED
if(canceled) {
cancelLastStroke()
} else {
currentPath.add(DrawPoint(motionEvent.x, motionEvent.y, DrawPointType.LINE))
}
}
- In the ACTION_CANCEL constant, notice the cancelLastStroke function:
StylusViewModel.kt
...
class StylusViewModel : ViewModel() {
...
fun processMotionEvent(motionEvent: MotionEvent): Boolean {
...
MotionEvent.ACTION_CANCEL -> {
// unwanted touch detected
cancelLastStroke()
}
Palm rejection is implemented! You can find the working code in the palm-rejection folder.
6. Implement low latency
In this section, you reduce the latency between the user input and the rendering on screen to improve performance. Latency has multiple causes; one of them is a graphics pipeline that is too long. You can shorten the graphics pipeline with front-buffered rendering, which gives developers direct access to the screen buffer and works well for handwriting and sketching.
The GLFrontBufferedRenderer class, provided by the androidx.graphics library, takes care of front-buffered and double-buffered rendering. It optimizes a SurfaceView object for fast rendering with the onDrawFrontBufferedLayer callback function and for normal rendering with the onDrawDoubleBufferedLayer callback function. The GLFrontBufferedRenderer class and the GLFrontBufferedRenderer.Callback interface work with a data type provided by the user. In this codelab, you use the Segment class.
To get started, follow these steps:
- In Android Studio, open the low-latency folder so that you have all the required files.
- Notice the following new files in the project:
  - In the build.gradle file, the androidx.graphics library is imported with the implementation "androidx.graphics:graphics-core:1.0.0-alpha03" declaration.
  - The LowLatencySurfaceView class extends the SurfaceView class to render OpenGL code on the screen.
  - The LineRenderer class holds the OpenGL code to render a line on the screen.
  - The FastRenderer class enables fast rendering and implements the GLFrontBufferedRenderer.Callback interface. It also intercepts MotionEvent objects.
  - The StylusViewModel class holds the data points with a LineManager interface.
  - The Segment class defines a segment as follows (a minimal sketch follows this list):
    - x1, y1: coordinates of the first point
    - x2, y2: coordinates of the second point
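A minimal sketch of such a class, assuming plain Float coordinates (the low-latency project ships its own definition):
// Sketch of the Segment described above: a short line between two points.
data class Segment(
    val x1: Float, val y1: Float, // first point
    val x2: Float, val y2: Float  // second point
)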
The following image shows how the data moves between each class:
Create a low-latency surface and layout
- In the MainActivity.kt file, find the MainActivity class's onCreate function.
- In the body of the onCreate function, create a FastRenderer object, and then pass in a viewModel object:
MainActivity.kt
class MainActivity : ComponentActivity() {
...
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
fastRendering = FastRenderer(viewModel)
lifecycleScope.launch {
...
- In the same file, create a DrawAreaLowLatency composable function.
- In the function's body, use the AndroidView API to wrap the LowLatencySurfaceView view, and then provide the fastRendering object:
MainActivity.kt
import androidx.compose.ui.viewinterop.AndroidView
import com.example.stylus.gl.LowLatencySurfaceView
class MainActivity : ComponentActivity() {
...
@Composable
fun DrawAreaLowLatency(modifier: Modifier = Modifier) {
AndroidView(factory = { context ->
LowLatencySurfaceView(context, fastRenderer = fastRendering)
}, modifier = modifier)
}
- In the onCreate function, after the Divider composable function, add the DrawAreaLowLatency composable function to the layout:
MainActivity.kt
class MainActivity : ComponentActivity() {
...
override fun onCreate(savedInstanceState: Bundle?) {
...
Surface(
modifier = Modifier
.fillMaxSize(),
color = MaterialTheme.colorScheme.background
) {
Column {
StylusVisualization(
modifier = Modifier
.fillMaxWidth()
.height(100.dp)
)
Divider(
thickness = 1.dp,
color = Color.Black,
)
DrawAreaLowLatency()
}
}
- In the gl directory, open the LowLatencySurfaceView.kt file, and then notice the following in the LowLatencySurfaceView class:
  - The LowLatencySurfaceView class extends the SurfaceView class. It uses the fastRenderer object's onTouchListener method.
  - The GLFrontBufferedRenderer.Callback interface, through the fastRenderer class, needs to be attached to the SurfaceView object when the onAttachedToWindow function is called so that the callbacks can render to the SurfaceView view.
  - The GLFrontBufferedRenderer.Callback interface, through the fastRenderer class, needs to be released when the onDetachedFromWindow function is called.
LowLatencySurfaceView.kt
class LowLatencySurfaceView(context: Context, private val fastRenderer: FastRenderer) :
SurfaceView(context) {
init {
setOnTouchListener(fastRenderer.onTouchListener)
}
override fun onAttachedToWindow() {
super.onAttachedToWindow()
fastRenderer.attachSurfaceView(this)
}
override fun onDetachedFromWindow() {
fastRenderer.release()
super.onDetachedFromWindow()
}
}
Handle MotionEvent objects with the onTouchListener interface
To handle MotionEvent objects when the ACTION_DOWN constant is detected, follow these steps:
- In the gl directory, open the FastRenderer.kt file.
- In the body of the ACTION_DOWN constant, create a currentX variable that stores the MotionEvent object's x coordinate and a currentY variable that stores its y coordinate.
- Create a segment variable that stores a Segment object, which accepts two instances of the currentX parameter and two instances of the currentY parameter because it's the start of the line.
- Call the renderFrontBufferedLayer method with the segment parameter to trigger a callback on the onDrawFrontBufferedLayer function.
FastRenderer.kt
class FastRenderer ( ... ) {
...
val onTouchListener = View.OnTouchListener { view, event ->
...
MotionEvent.ACTION_DOWN -> {
// Ask that the input system not batch MotionEvent objects,
// but instead deliver them as soon as they're available.
view.requestUnbufferedDispatch(event)
currentX = event.x
currentY = event.y
// Create a single point.
val segment = Segment(currentX, currentY, currentX, currentY)
frontBufferRenderer?.renderFrontBufferedLayer(segment)
}
To handle MotionEvent objects when the ACTION_MOVE constant is detected, follow these steps:
- In the body of the ACTION_MOVE constant, create a previousX variable that stores the currentX variable and a previousY variable that stores the currentY variable.
- Create a currentX variable that saves the MotionEvent object's current x coordinate and a currentY variable that saves its current y coordinate.
- Create a segment variable that stores a Segment object, which accepts the previousX, previousY, currentX, and currentY parameters.
- Call the renderFrontBufferedLayer method with the segment parameter to trigger a callback on the onDrawFrontBufferedLayer function and execute OpenGL code.
FastRenderer.kt
class FastRenderer ( ... ) {
...
val onTouchListener = View.OnTouchListener { view, event ->
...
MotionEvent.ACTION_MOVE -> {
previousX = currentX
previousY = currentY
currentX = event.x
currentY = event.y
val segment = Segment(previousX, previousY, currentX, currentY)
// Send the short line to front buffered layer: fast rendering
frontBufferRenderer?.renderFrontBufferedLayer(segment)
}
- To handle MotionEvent objects when the ACTION_UP constant is detected, call the commit method to trigger a call to the onDrawDoubleBufferedLayer function and execute OpenGL code:
FastRenderer.kt
class FastRenderer ( ... ) {
...
val onTouchListener = View.OnTouchListener { view, event ->
...
MotionEvent.ACTION_UP -> {
frontBufferRenderer?.commit()
}
Implement the GLFrontBufferedRenderer callback functions
In the FastRenderer.kt file, the onDrawFrontBufferedLayer and onDrawDoubleBufferedLayer callback functions execute OpenGL code. At the beginning of each callback function, the following OpenGL functions map Android data to the OpenGL workspace:
- The GLES20.glViewport function defines the size of the rectangle in which you render the scene.
- The Matrix.orthoM function computes the ModelViewProjection matrix.
- The Matrix.multiplyMM function performs the matrix multiplication that transforms the Android data into the OpenGL reference frame and sets up the projection matrix.
FastRenderer.kt
class FastRenderer( ... ) {
...
override fun onDraw[Front/Double]BufferedLayer(
eglManager: EGLManager,
bufferInfo: BufferInfo,
transform: FloatArray,
params: Collection<Segment>
) {
val bufferWidth = bufferInfo.width
val bufferHeight = bufferInfo.height
GLES20.glViewport(0, 0, bufferWidth, bufferHeight)
// Map Android coordinates to OpenGL coordinates.
Matrix.orthoM(
mvpMatrix,
0,
0f,
bufferWidth.toFloat(),
0f,
bufferHeight.toFloat(),
-1f,
1f
)
Matrix.multiplyMM(projection, 0, mvpMatrix, 0, transform, 0)
With that part of the code set up for you, you can focus on the code that does the actual rendering. The onDrawFrontBufferedLayer callback function renders a small area of the screen. It provides a param value of Segment type so that you can render a single segment fast. The LineRenderer class is an OpenGL renderer for the brush that applies the color and size of the line.
To implement the onDrawFrontBufferedLayer callback function, follow these steps:
- In the FastRenderer.kt file, find the onDrawFrontBufferedLayer callback function.
- In the onDrawFrontBufferedLayer callback function's body, call the obtainRenderer function to get the LineRenderer instance.
- Call the LineRenderer instance's drawLine method with the following parameters:
  - The previously calculated projection matrix.
  - A list of Segment objects, which in this case is a single segment.
  - The color of the line.
FastRenderer.kt
import android.graphics.Color
import androidx.core.graphics.toColor
class FastRenderer( ... ) {
...
override fun onDrawFrontBufferedLayer(
eglManager: EGLManager,
bufferInfo: BufferInfo,
transform: FloatArray,
param: Segment
) {
...
Matrix.multiplyMM(projection, 0, mvpMatrix, 0, transform, 0)
obtainRenderer().drawLine(projection, listOf(param), Color.GRAY.toColor())
}
- Run the app, and then notice that you can draw on the screen with minimal latency. However, the app doesn't persist the line because you still need to implement the onDrawDoubleBufferedLayer callback function.
The onDrawDoubleBufferedLayer callback function is called after the commit function to allow persistence of the line. The callback provides a params value, which contains a collection of Segment objects. All the segments in the front buffer are replayed in the double buffer for persistence.
To implement the onDrawDoubleBufferedLayer callback function, follow these steps:
- In the StylusViewModel.kt file, find the StylusViewModel class, and then create an openGlLines variable that stores a mutable list of lines, where each line is a list of Segment objects:
StylusViewModel.kt
import com.example.stylus.data.Segment
class StylusViewModel : ViewModel() {
private var _stylusState = MutableStateFlow(StylusState())
val stylusState: StateFlow<StylusState> = _stylusState
val openGlLines = mutableListOf<List<Segment>>()
private fun requestRendering(stylusState: StylusState) {
- In the FastRenderer.kt file, find the FastRenderer class's onDrawDoubleBufferedLayer callback function.
- In the body of the onDrawDoubleBufferedLayer callback function, clear the screen with the GLES20.glClearColor and GLES20.glClear methods so that the scene can be rendered from scratch, and add the lines to the viewModel object to persist them:
FastRenderer.kt
class FastRenderer( ... ) {
...
override fun onDrawDoubleBufferedLayer(
eglManager: EGLManager,
bufferInfo: BufferInfo,
transform: FloatArray,
params: Collection<Segment>
) {
...
// Clear the screen with black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
viewModel.openGlLines.add(params.toList())
- Create a for loop that iterates through and renders each line from the viewModel object:
FastRenderer.kt
class FastRenderer( ... ) {
...
override fun onDrawDoubleBufferedLayer(
eglManager: EGLManager,
bufferInfo: BufferInfo,
transform: FloatArray,
params: Collection<Segment>
) {
...
// Clear the screen with black.
GLES20.glClearColor(0.0f, 0.0f, 0.0f, 1.0f)
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT)
viewModel.openGlLines.add(params.toList())
// Render the entire scene (all lines).
for (line in viewModel.openGlLines) {
obtainRenderer().drawLine(projection, line, Color.GRAY.toColor())
}
}
- Run the app, and then notice that you can draw on the screen, and the line is preserved after the ACTION_UP constant is triggered.
7. Implement motion prediction
You can further improve latency with the androidx.input library, which analyzes the course of the stylus, predicts the location of the next point, and inserts it for rendering.
To set up motion prediction, follow these steps:
- In the app/build.gradle file, import the library in the dependencies section:
app/build.gradle
...
dependencies {
...
implementation"androidx.input:input-motionprediction:1.0.0-beta01"
- Click **File > Sync Project with Gradle Files**.
- In the FastRenderer.kt file's FastRenderer class, declare the motionEventPredictor object as an attribute:
FastRenderer.kt
import androidx.input.motionprediction.MotionEventPredictor
class FastRenderer( ... ) {
...
private var frontBufferRenderer: GLFrontBufferedRenderer<Segment>? = null
private var motionEventPredictor: MotionEventPredictor? = null
- In the attachSurfaceView function, initialize the motionEventPredictor variable:
FastRenderer.kt
class FastRenderer( ... ) {
...
fun attachSurfaceView(surfaceView: SurfaceView) {
frontBufferRenderer = GLFrontBufferedRenderer(surfaceView, this)
motionEventPredictor = MotionEventPredictor.newInstance(surfaceView)
}
- In the onTouchListener variable, call the motionEventPredictor?.record method so that the motionEventPredictor object gets motion data:
FastRenderer.kt
class FastRenderer( ... ) {
...
val onTouchListener = View.OnTouchListener { view, event ->
motionEventPredictor?.record(event)
...
when (event?.action) {
The next step is to predict a MotionEvent object with the predict function. We recommend predicting when an ACTION_MOVE constant is received and after the MotionEvent object is recorded. In other words, you should predict when a stroke is underway.
- Predict an artificial MotionEvent object with the predict method.
- Create a Segment object that uses the current and predicted x and y coordinates.
- Request fast rendering of the predicted segment with the frontBufferRenderer?.renderFrontBufferedLayer(predictedSegment) method:
FastRenderer.kt
class FastRenderer( ... ) {
...
val onTouchListener = View.OnTouchListener { view, event ->
motionEventPredictor?.record(event)
...
when (event?.action) {
...
MotionEvent.ACTION_MOVE -> {
...
frontBufferRenderer?.renderFrontBufferedLayer(segment)
val motionEventPredicted = motionEventPredictor?.predict()
if(motionEventPredicted != null) {
val predictedSegment = Segment(currentX, currentY,
motionEventPredicted.x, motionEventPredicted.y)
frontBufferRenderer?.renderFrontBufferedLayer(predictedSegment)
}
}
...
}
Predicted events are inserted for rendering, which improves the perceived latency.
- Run the app, and then notice the improved latency.
Improving latency gives stylus users a more natural stylus experience.
8. Congratulations
Congratulations! You now know how to handle a stylus like a pro!
You learned how to process MotionEvent objects to extract information about pressure, orientation, and tilt. You also learned how to improve latency by implementing both the androidx.graphics and androidx.input libraries. Implemented together, these enhancements offer a more organic stylus experience.