archives

Tech

This category contains 66 posts

[UI Basics] Accelerometer, gyroscope, and compass

In a smartphone, there are three sensors that all seem to tell us something about orientation: the accelerometer, the gyroscope, and the compass. But what’s the difference?

In terms of what information they give,

  • accelerometer: if you take the resultant force on the phone (via Newton’s second law, F = ma), subtract gravity (mg), and divide by the mass, the resulting acceleration ((F – mg) / m) is what the accelerometer reports (usually decomposed along three axes into three component values);
  • gyroscope: whereas the accelerometer senses linear acceleration, a gyroscope senses angular motion. The best way to think of it is: the gyroscope tells how a device yaws, pitches, and rolls;
  • compass: tells the phone’s heading relative to the earth’s magnetic field.
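A quick illustration of the accelerometer bullet: when the phone is held still, the reading is dominated by gravity, so tilt angles can be estimated from the three component values. Below is a minimal sketch; the class and method names are my own, not from any Android API, and the formula only holds while the device is not otherwise accelerating.

```java
public class TiltFromAccel {
    // Estimate pitch and roll (in degrees) from a static accelerometer
    // reading (ax, ay, az) in m/s^2. Only valid when the device is held
    // still, so that gravity dominates the reading.
    static double pitchDegrees(double ax, double ay, double az) {
        return Math.toDegrees(Math.atan2(-ax, Math.sqrt(ay * ay + az * az)));
    }

    static double rollDegrees(double ay, double az) {
        return Math.toDegrees(Math.atan2(ay, az));
    }

    public static void main(String[] args) {
        // Device lying flat on a table: gravity falls entirely on the z axis.
        System.out.println(pitchDegrees(0, 0, 9.81)); // 0.0
        System.out.println(rollDegrees(0, 9.81));     // 0.0
    }
}
```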

In terms of how they work,

  • accelerometer: there are many ways to build an accelerometer. For example, [4] introduces one using a crystal structure that is sensitive to accelerative forces and, as a result, generates a voltage. Basically, accelerometers are pretty much self-contained units that provide ‘primitive’ sensory information pertinent to a device;
  • gyroscope: the kind of gyroscope used in a device is usually a MEMS (MicroElectroMechanical System) gyroscope, which uses the Coriolis effect to measure angular rate [2]. It is also a self-contained unit;
  • compass: can be built from a magnetometer that determines orientation from the earth’s magnetic field [5]. Alternatively, a magnetometer can work together with the accelerometer to compute the compass heading, as the AK8973 chip used on iPhones does [3].

I believe knowing the difference between these three popular phone sensors can help us build better mobile applications.

References

[1] http://www.devx.com/wireless/Article/44799

[2] http://www.electroiq.com/articles/stm/2010/11/introduction-to-mems-gyroscopes.html

[3] http://mobiledeviceinsight.com/2011/12/sensors-in-smartphones/

[4] http://www.dimensionengineering.com/info/accelerometers

[5] http://spectrum.ieee.org/semiconductors/devices/a-compass-in-every-smartphone


Notes of [Parallel Programming Primer] by Haungs & Keen

1. So! What’s Moore’s Law?

The number of transistors that can be inexpensively integrated in one circuit will double approximately every 18 months.
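As a back-of-the-envelope sketch of what that doubling rate means (the class and method names are illustrative, not from any standard library):

```java
public class MooresLaw {
    // Projected transistor count after `months`, doubling every 18 months,
    // starting from `initial` transistors.
    static long projectedCount(long initial, int months) {
        int doublings = months / 18;   // complete 18-month periods elapsed
        return initial << doublings;   // initial * 2^doublings
    }

    public static void main(String[] args) {
        // After 6 years (72 months) the count has doubled 4 times: x16.
        System.out.println(projectedCount(1_000_000, 72)); // 16000000
    }
}
```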

2. So! What are the main challenges for parallel computing?

How to come up with corresponding programming platforms, algorithms, and application designs that effectively use the hardware architecture.

3. So! How’s communication happening in parallel computing?

There are two ways to enable communication: message passing and shared address space. (With message passing, processes invoke explicit protocols to communicate with each other, while with a shared address space they communicate implicitly via a common piece of memory.)
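The two styles can be contrasted in a few lines of Java (the class and method names are my own): shared address space as a counter living in common memory, and message passing as an explicit channel between a producer and a consumer.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class CommDemo {
    // Shared address space: two threads communicate implicitly by
    // incrementing one counter that lives in memory both can see.
    static int sharedCount(int perThread) throws InterruptedException {
        AtomicInteger shared = new AtomicInteger(0);
        Runnable work = () -> { for (int i = 0; i < perThread; i++) shared.incrementAndGet(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return shared.get();
    }

    // Message passing: a producer sends explicit messages over a channel;
    // the consumer receives and combines them.
    static int messageSum(int n) throws InterruptedException {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(n);
        Thread producer = new Thread(() -> {
            for (int i = 1; i <= n; i++) {
                try { channel.put(i); } catch (InterruptedException ignored) { }
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < n; i++) sum += channel.take();
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(sharedCount(1000)); // 2000
        System.out.println(messageSum(3));     // 6
    }
}
```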

4. So! What is parallel algorithm? Give some examples.

A parallel algorithm cuts a problem into pieces, each of which is consumed by an individual processing unit, before the partial results are eventually put back together to obtain the final result.

Examples: ray tracing (data parallel), parallel quicksort (task parallel), pipeline (graphics pipeline), web server (work pool), etc.
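The split-compute-combine idea behind these examples can be sketched with Java's fork/join framework; here, a parallel sum over an array (the class name and threshold are illustrative choices, not from any of the cited examples):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelSum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000;
    private final long[] data;
    private final int lo, hi;

    ParallelSum(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) {           // small enough: sum sequentially
            long s = 0;
            for (int i = lo; i < hi; i++) s += data[i];
            return s;
        }
        int mid = (lo + hi) / 2;              // otherwise split into two subtasks
        ParallelSum left = new ParallelSum(data, lo, mid);
        ParallelSum right = new ParallelSum(data, mid, hi);
        left.fork();                          // run the left half in parallel
        return right.compute() + left.join(); // combine the partial results
    }

    public static void main(String[] args) {
        long[] data = new long[10_000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;
        long sum = ForkJoinPool.commonPool().invoke(new ParallelSum(data, 0, data.length));
        System.out.println(sum); // 50005000
    }
}
```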

5. So! What are the pitfalls of parallel computing?

Synchronization (is there a lock?), efficiency (is the parallelism maximally exploited?), and reliability (is the result correct? If not, can we debug it?).

Notes of [MapReduce: simplified data …] by Dean & Ghemawat

1. So! What is MapReduce?

MapReduce is a two-step mechanism for manipulating large-scale distributed data. In particular, the ‘map’ step visits the data according to programmer-defined rules, then the ‘reduce’ step collects the intermediate results from ‘map’ and processes them to produce the final result.

2. So! Why do we need MapReduce?

Because the data Google handles is of large scale and distributed across machines. Hence the conventional way of loading all the data necessary into the memory before the processing can start simply does not work.

3. So! Give me an example of how MapReduce works.

Say you are counting the occurrences of a word across millions of web pages. The ‘map’ step would go through these pages and fire a signal whenever it finds the word. The ‘reduce’ step would then collect the lists of signals and count them into a numeric value.
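That word-counting example can be sketched, very loosely, in sequential Java (real MapReduce distributes the map and reduce steps across machines; all class and method names here are made up):

```java
import java.util.ArrayList;
import java.util.List;

public class WordCountSketch {
    // Map step: emit a signal (a 1) for every occurrence of the target word.
    static List<Integer> map(String page, String word) {
        List<Integer> signals = new ArrayList<>();
        for (String token : page.toLowerCase().split("\\W+")) {
            if (token.equals(word)) signals.add(1);
        }
        return signals;
    }

    // Reduce step: collect the intermediate signal lists and sum them.
    static int reduce(List<List<Integer>> intermediate) {
        int count = 0;
        for (List<Integer> signals : intermediate)
            for (int s : signals) count += s;
        return count;
    }

    public static void main(String[] args) {
        String[] pages = {"the cat and the hat", "the end"};
        List<List<Integer>> intermediate = new ArrayList<>();
        for (String page : pages) intermediate.add(map(page, "the"));
        System.out.println(reduce(intermediate)); // 3
    }
}
```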

Notes of [Bigtable: a distributed… ] by Chang et al.

1. So! What is Bigtable?

Bigtable is similar to the table concept in a database, but it is deliberately designed for managing large-scale, structured data across distributed storage systems.

2. So! How is it ‘deliberate’?

Bigtable is a multi-dimensional map indexed by a row key, a column key, and a timestamp. The value is an uninterpreted array of bytes.

3. So! Give me an example of a Bigtable.

Consider web pages. The domain name might serve as the row key; the contents of the HTML file might be indexed by column keys and further versioned with timestamps, e.g., two weeks ago, yesterday, etc.
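That (row key, column key, timestamp) map can be sketched with nested Java maps. This is a toy, in-memory stand-in just to make the indexing concrete; the names are mine, and the real Bigtable is of course distributed and persistent.

```java
import java.util.HashMap;
import java.util.Map;

public class BigtableSketch {
    // (row key, column key, timestamp) -> uninterpreted array of bytes
    private final Map<String, Map<String, Map<Long, byte[]>>> table = new HashMap<>();

    void put(String row, String column, long timestamp, byte[] value) {
        table.computeIfAbsent(row, r -> new HashMap<>())
             .computeIfAbsent(column, c -> new HashMap<>())
             .put(timestamp, value);
    }

    byte[] get(String row, String column, long timestamp) {
        return table.getOrDefault(row, Map.of())
                    .getOrDefault(column, Map.of())
                    .get(timestamp);  // null if this version does not exist
    }

    public static void main(String[] args) {
        BigtableSketch t = new BigtableSketch();
        // Row key: the domain; column: page contents; timestamp: crawl time.
        t.put("com.example.www", "contents:html", 1L, "<html>v1</html>".getBytes());
        t.put("com.example.www", "contents:html", 2L, "<html>v2</html>".getBytes());
        System.out.println(new String(t.get("com.example.www", "contents:html", 2L)));
    }
}
```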

4. So! What’s the biggest difference between a Bigtable and a normal database table?

Normal database tables provide a relational model; but Bigtable only provides a simple data model that supports dynamic control over data layout and format, and allows clients to reason about the locality properties of the data represented in the underlying storage.

5. So! What Google products use Bigtables?

Google Earth, Google Analytics, Google Finance, web indexing in Google Search, etc.

Notes of [The Google file system] by Ghemawat et al.

1. What is Google File System (GFS)?

Google File System is a scalable distributed file system for large distributed data-intensive applications.

(The Google File System demonstrates the qualities essential for supporting large-scale data processing workloads on commodity hardware)

2. What are the key features of GFS?

  1. Component failures are the norm rather than the exception;
  2. Files are huge by traditional standards;
  3. Most files are mutated by appending new data rather than overwriting existing data;
  4. Co-designing the applications and the file system API benefits the overall system by increasing our flexibility.
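Feature 3 above, append-heavy mutation, can be illustrated with a plain-Java sketch of an append-only log (nothing GFS-specific; the class and method names are my own):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendLog {
    // Append records to a log file; existing data is never overwritten in place.
    static String appendRecords(Path log, String... records) throws IOException {
        for (String r : records) {
            Files.writeString(log, r + "\n", StandardOpenOption.APPEND);
        }
        return Files.readString(log);
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("append-style", ".log");
        System.out.print(appendRecords(log, "record-1", "record-2"));
        Files.delete(log);
    }
}
```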

3. What is clustered storage?

When a file system uses clustered storage, it is simultaneously mounted on multiple servers.

Four Steps to Start Using Camera in Your Android Application

As a hello world of the Android camera, our goal is to show the camera image on the screen with a minimum of code (similar to the built-in camera app). After creating a normal Android project:

1. Add user permission to the manifest xml file

This line is added as a child node of <manifest>:

<uses-permission android:name="android.permission.CAMERA"/>

2. Create and initialize a Camera object


private Camera mCamera;

// ...

mCamera = Camera.open();

3. Add a private CameraView class extending SurfaceView to display images captured from the camera

See the detailed code below. Whenever you want a non-static view (animation, camera, etc.), you might need to extend SurfaceView.

4. Create and initialize a CameraView object and setContentView it


private CameraView mView;

// ...

mView = new CameraView(this);
setContentView(mView);

Sample Code:


package me.xiangchen.hellostuff;

import java.io.IOException;

import android.app.Activity;
import android.content.Context;
import android.graphics.Canvas;
import android.hardware.Camera;
import android.os.Bundle;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class HelloCameraActivity extends Activity {

private Camera mCamera;
 private CameraView mView;

 @Override
 public void onCreate(Bundle savedInstanceState) {
 super.onCreate(savedInstanceState);

 mCamera = Camera.open();
 mView = new CameraView(this);

 setContentView(mView);
 }

 // extending SurfaceView to render the camera images
 private class CameraView extends SurfaceView implements SurfaceHolder.Callback{
 private SurfaceHolder mHolder;

 public CameraView(Context context) {
 super(context);

 mHolder = this.getHolder();
 mHolder.addCallback(this);
 mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);

 setFocusable(true);

 }

 @Override
 public void surfaceChanged(SurfaceHolder holder, int format, int width,
 int height) {
 }
 @Override
 public void surfaceCreated(SurfaceHolder holder) {

 try {
 mCamera.setPreviewDisplay(mHolder);
 } catch (IOException e) {
 mCamera.release();
 }
 mCamera.startPreview();
 }

@Override
 public void surfaceDestroyed(SurfaceHolder holder) {

 mCamera.stopPreview();
 mCamera.release();

 }

 }

}

An Android Accelerometer Example

Getting started with Android’s sensors is not difficult at all. This post shows a framework (particularly using the accelerometer) in less than 30 lines of code.


public class SensorActivity extends Activity implements SensorEventListener {
private final SensorManager mSensorManager;
private final Sensor mAccelerometer;

public SensorActivity() {
mSensorManager = (SensorManager)getSystemService(SENSOR_SERVICE);
mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
}

protected void onResume() {
super.onResume();
mSensorManager.registerListener(this, mAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
}

protected void onPause() {
super.onPause();
mSensorManager.unregisterListener(this);
}

public void onAccuracyChanged(Sensor sensor, int accuracy) {
}

public void onSensorChanged(SensorEvent event) {
}
}

The following program illustrates the accelerometer in a more concrete example. It simulates the physics of a ball rolling in a box whose bottom plane is tilted by the user.

Bouncing Ball using Android's accelerometer


package me.xiangchen.apps;

import java.util.Timer;
import android.app.Activity;
import android.content.Context;
import android.content.pm.ActivityInfo;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.RectF;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;
import android.os.Bundle;
import android.os.Handler;
import android.os.SystemClock;
import android.view.Display;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.WindowManager;

public class BouncingBallActivity extends Activity implements SensorEventListener{

 // sensor-related
 private SensorManager mSensorManager;
 private Sensor mAccelerometer;

 // animated view
 private ShapeView mShapeView;

 // screen size
 private int mWidthScreen;
 private int mHeightScreen;

 // motion parameters
 private final float FACTOR_FRICTION = 0.5f; // imaginary friction on the screen
 private final float GRAVITY = 9.8f; // acceleration of gravity
 private float mAx; // acceleration along x axis
 private float mAy; // acceleration along y axis
 private final float mDeltaT = 0.5f; // imaginary time interval between each acceleration updates

 // timer
 private Timer mTimer;
 private Handler mHandler;
 private boolean isTimerStarted = false;
 private long mStart;

 @Override
 public void onCreate(Bundle savedInstanceState) {
 super.onCreate(savedInstanceState);

 // set the screen always portrait
 setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_PORTRAIT);

 // initializing sensors
 mSensorManager = (SensorManager)getSystemService(SENSOR_SERVICE);
 mAccelerometer = mSensorManager.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);

 // obtain screen width and height
 Display display = ((WindowManager)this.getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
 mWidthScreen = display.getWidth();
 mHeightScreen = display.getHeight();

 // initializing the view that renders the ball
 mShapeView = new ShapeView(this);
 mShapeView.setOvalCenter((int)(mWidthScreen * 0.6), (int)(mHeightScreen * 0.6));

 setContentView(mShapeView);
 }

@Override
 public void onAccuracyChanged(Sensor sensor, int accuracy) {

 }

@Override
 public void onSensorChanged(SensorEvent event) {
 // obtain the three accelerations from sensors
 mAx = event.values[0];
 mAy = event.values[1];

 float mAz = event.values[2];

 // taking into account the frictions
 mAx = Math.signum(mAx) * Math.abs(mAx) * (1 - FACTOR_FRICTION * Math.abs(mAz) / GRAVITY);
 mAy = Math.signum(mAy) * Math.abs(mAy) * (1 - FACTOR_FRICTION * Math.abs(mAz) / GRAVITY);
 }

 @Override
 protected void onResume() {
 super.onResume();
 // start sensor sensing
 mSensorManager.registerListener(this, mAccelerometer, SensorManager.SENSOR_DELAY_NORMAL);
 }

@Override
 protected void onPause() {
 super.onPause();
 // stop sensor sensing
 mSensorManager.unregisterListener(this);
 }

 // the view that renders the ball
 private class ShapeView extends SurfaceView implements SurfaceHolder.Callback{

private final int RADIUS = 50;
 private final float FACTOR_BOUNCEBACK = 0.75f;

 private int mXCenter;
 private int mYCenter;
 private RectF mRectF;
 private final Paint mPaint;
 private ShapeThread mThread;

 private float mVx;
 private float mVy;

 public ShapeView(Context context) {
 super(context);

 getHolder().addCallback(this);
 mThread = new ShapeThread(getHolder(), this);
 setFocusable(true);

 mPaint = new Paint();
 mPaint.setColor(0xFFFFFFFF);
 mPaint.setAlpha(192);
 mPaint.setStyle(Paint.Style.FILL);
 mPaint.setAntiAlias(true);

 mRectF = new RectF();
 }

// set the position of the ball
 public boolean setOvalCenter(int x, int y)
 {
 mXCenter = x;
 mYCenter = y;
 return true;
 }

 // calculate and update the ball's position
 public boolean updateOvalCenter()
 {
 mVx -= mAx * mDeltaT;
 mVy += mAy * mDeltaT;

 mXCenter += (int)(mDeltaT * (mVx + 0.5 * mAx * mDeltaT));
 mYCenter += (int)(mDeltaT * (mVy + 0.5 * mAy * mDeltaT));

 if(mXCenter < RADIUS)
 {
 mXCenter = RADIUS;
 mVx = -mVx * FACTOR_BOUNCEBACK;
 }

 if(mYCenter < RADIUS)
 {
 mYCenter = RADIUS;
 mVy = -mVy * FACTOR_BOUNCEBACK;
 }

 if(mXCenter > mWidthScreen - RADIUS)
 {
 mXCenter = mWidthScreen - RADIUS;
 mVx = -mVx * FACTOR_BOUNCEBACK;
 }

 if(mYCenter > mHeightScreen - 2 * RADIUS)
 {
 mYCenter = mHeightScreen - 2 * RADIUS;
 mVy = -mVy * FACTOR_BOUNCEBACK;
 }

 return true;
 }

 // update the canvas
 protected void onDraw(Canvas canvas)
 {
 if(mRectF != null)
 {
 mRectF.set(mXCenter - RADIUS, mYCenter - RADIUS, mXCenter + RADIUS, mYCenter + RADIUS);
 canvas.drawColor(0XFF000000);
 canvas.drawOval(mRectF, mPaint);
 }
 }

@Override
 public void surfaceChanged(SurfaceHolder holder, int format, int width,
 int height) {
 }

@Override
 public void surfaceCreated(SurfaceHolder holder) {
 mThread.setRunning(true);
 mThread.start();
 }

@Override
 public void surfaceDestroyed(SurfaceHolder holder) {
 boolean retry = true;
 mThread.setRunning(false);
 while(retry)
 {
 try{
 mThread.join();
 retry = false;
 } catch (InterruptedException e){

 }
 }
 }
 }

 class ShapeThread extends Thread {
 private SurfaceHolder mSurfaceHolder;
 private ShapeView mShapeView;
 private boolean mRun = false;

 public ShapeThread(SurfaceHolder surfaceHolder, ShapeView shapeView) {
 mSurfaceHolder = surfaceHolder;
 mShapeView = shapeView;
 }

 public void setRunning(boolean run) {
 mRun = run;
 }

 public SurfaceHolder getSurfaceHolder() {
 return mSurfaceHolder;
 }

 @Override
 public void run() {
 Canvas c;
 while (mRun) {
 mShapeView.updateOvalCenter();
 c = null;
 try {
 c = mSurfaceHolder.lockCanvas(null);
 synchronized (mSurfaceHolder) {
 mShapeView.onDraw(c);
 }
 } finally {
 if (c != null) {
 mSurfaceHolder.unlockCanvasAndPost(c);
 }
 }
 }
 }
 }
}

Extending Android View to Create Clickable Drawables

The Android API has a poor design for Drawable: Drawables cannot establish a parent-child relationship with the app’s view; instead, the view must redraw them on the fly, where those drawn Drawables are just dead, non-interactive portraits.

To make a Drawable interactive, or more specifically clickable, I choose to bypass the Drawable class; instead I extend View (which is clickable) and draw the Drawables in it. The View subclass provides an overridable onTouchEvent, wherein I can determine the spatial relationship between the touch and the drawn Drawables (though in a mathematical way) so as to make them (appear) clickable.

Below is a simple example. To use this code, setContentView it in your onCreate method; it will show an oval that changes its color when clicked. To keep things simple, the code does not take care of everything and might contain glitches. You fix them.


package me.xiangchen.basicstuff;

import java.util.ArrayList;
import java.util.Hashtable;

import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.graphics.RectF;
import android.view.Display;
import android.view.MotionEvent;
import android.view.View;
import android.view.WindowManager;

public class InteractiveDrawable extends View {

private final int radius = 100;

 private Canvas mCanvas;
 private int xTouch;
 private int yTouch;
 private Paint mPaint;

 private Hashtable mHashTable;

 public InteractiveDrawable(Context context, Paint paint) {
 super(context);
 mPaint = new Paint();
 mPaint.setColor(paint.getColor());
 mPaint.setAlpha(160);
 mPaint.setDither(true);
 mPaint.setAntiAlias(true);
 mPaint.setStyle(Paint.Style.FILL);
 mPaint.setStrokeJoin(Paint.Join.ROUND);
 mPaint.setStrokeCap(Paint.Cap.ROUND);
 mPaint.setStrokeWidth(3);
 }

 private boolean isInShape(int x, int y)
 {
 boolean result = false;
 if(Math.sqrt((x - xTouch) * (x - xTouch) + (y - yTouch) * (y - yTouch)) < radius)
 {
 result = true;
 }
 return result;
 }

 @Override
 public boolean onTouchEvent(MotionEvent event)
 {
 int action = event.getAction();
 if(action == MotionEvent.ACTION_DOWN)
 {
 if(isInShape((int)event.getX(), (int)event.getY()))
 {
 int tmpR = (int)(Math.random() * 255);
 int tmpG = (int)(Math.random() * 255);
 int tmpB = (int)(Math.random() * 255);
 mPaint.setARGB(160, tmpR, tmpG, tmpB);
 this.invalidate();
 }
 }
 return true;
 }

@Override
 // decides the size of this view
 protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {

 Display display = ((WindowManager)this.getContext().getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
 // the view occupies the entire screen. however, this is not the only option
 setMeasuredDimension(display.getWidth(), display.getHeight());
 }

 @Override
 protected void onDraw(Canvas canvas) {
 RectF tmpRectF = new RectF();
 tmpRectF.set(xTouch - radius, yTouch - radius, xTouch + radius, yTouch + radius);
 canvas.drawOval(tmpRectF, mPaint);
 }
}

Tweaking the Performance of An Android Drawing App

This post presents a simple drawing app for Android. However, there is a glitch. As it collects every single small path while the finger is stroking on the canvas,

 @Override
 public boolean onTouchEvent(MotionEvent event) {
 synchronized (mThread.getSurfaceHolder()) {
 if(event.getAction() == MotionEvent.ACTION_DOWN){
 path = new Path();
 // update the starting point of the new path
 path.moveTo(event.getX(), event.getY());
 path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_MOVE){
 // draw to the new point
 path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_UP){
 // last drawing
 path.lineTo(event.getX(), event.getY());
 }

 // collect every time the method is called!!!!
 mGraphics.add(path);

 return true;
 }

the ArrayList<Path> grows quickly and the redraw loop takes longer and longer.

for (Path path : mGraphics) {
 canvas.drawPath(path, mPaint);
 }

To solve this, we simply make two changes:

1) Update mGraphics at the very end of a stroke;

 @Override
 public boolean onTouchEvent(MotionEvent event) {
 synchronized (mThread.getSurfaceHolder()) {
 if(event.getAction() == MotionEvent.ACTION_DOWN){
   path = new Path();
   // update the starting point of the new path
   path.moveTo(event.getX(), event.getY());
   path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_MOVE){
   // draw to the new point
   path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_UP){
   // last drawing
   path.lineTo(event.getX(), event.getY());
  // not updating until now
  mGraphics.add(path);
 }

 // the line below has been moved
 // mGraphics.add(path);

 return true;
 }

2) Draw the changing path while stroking, as shown below

public void onDraw(Canvas canvas) {
// draw those updated and stored
  for (Path path : mGraphics)
  {
      canvas.drawPath(path, mPaint);
  }

// draw those that are not
 if(isStroking)
 {
     canvas.drawPath(path, paint);
 }
}

In this way, we avoid the excessive growth of the array of paths and streamline the redraw process.

Writing A Drawing App for Android

Thanks to these two posts:

http://www.tutorialforandroid.com/2009/06/drawing-with-canvas-in-android.html

http://www.droidnova.com/playing-with-graphics-in-android-part-iv,182.html

I just copied and pasted their code, fixed some glitches, and had my first drawing app on Android.

The basic idea is to maintain a SurfaceView wherein you can update drawing paths from touches, and draw them to its canvas.

Most of the code below comes from those two posts:


package me.xiangchen.basicstuff;

import java.util.ArrayList;
import android.app.Activity;
import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Path;
import android.os.Bundle;
import android.view.MotionEvent;
import android.view.SurfaceHolder;
import android.view.SurfaceView;

public class BasicSketchActivity extends Activity {

// Paint defines the drawn strokes' properties
 private Paint mPaint;
 /** Called when the activity is first created. */
 @Override
 public void onCreate(Bundle savedInstanceState) {
 super.onCreate(savedInstanceState);
 setContentView(new SketchView(this));

 mPaint = new Paint();
 mPaint.setDither(true);
 mPaint.setColor(0xFFFFFFFF);
 mPaint.setStyle(Paint.Style.STROKE);
 mPaint.setStrokeJoin(Paint.Join.ROUND);
 mPaint.setStrokeCap(Paint.Cap.ROUND);
 mPaint.setStrokeWidth(3);
 }

 private class SketchView extends SurfaceView implements SurfaceHolder.Callback {

private SketchThread mThread;
 private Path path;
 private ArrayList<Path> mGraphics = new ArrayList<Path>();

 public SketchView (Context context){
 super(context);

 //access to the underlying surface
 getHolder().addCallback(this);
 mThread = new SketchThread(getHolder(), this);
 setFocusable(true);
 }

 @Override
 public void surfaceChanged(SurfaceHolder holder, int format, int width,
 int height) {
 // TODO Auto-generated method stub

 }

@Override
 public void surfaceCreated(SurfaceHolder holder) {
 // TODO Auto-generated method stub
 mThread.setRunning(true);
 mThread.start();
 }

@Override
 public void surfaceDestroyed(SurfaceHolder holder) {
 // TODO Auto-generated method stub
 boolean retry = true;
 mThread.setRunning(false);
 while(retry)
 {
 try{
 mThread.join();
 retry = false;
 } catch (InterruptedException e){

 }
 }
 }

 @Override
 public void onDraw(Canvas canvas) {
 // take the paint to draw all the (small) paths in the array
 for (Path path : mGraphics) {
 canvas.drawPath(path, mPaint);
 }
 }

 @Override
 public boolean onTouchEvent(MotionEvent event) {
 synchronized (mThread.getSurfaceHolder()) {
 if(event.getAction() == MotionEvent.ACTION_DOWN){
 path = new Path();
 // update the starting point of the new path
 path.moveTo(event.getX(), event.getY());
 path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_MOVE){
 // draw to the new point
 path.lineTo(event.getX(), event.getY());
 }
 else if(event.getAction() == MotionEvent.ACTION_UP){
 // last drawing
 path.lineTo(event.getX(), event.getY());
 }
 mGraphics.add(path);
 return true;
 }
 }

 }

 class SketchThread extends Thread {
 private SurfaceHolder mSurfaceHolder;
 private SketchView mSketchView;
 private boolean mRun = false;

 public SketchThread(SurfaceHolder surfaceHolder, SketchView sketchView) {
 mSurfaceHolder = surfaceHolder;
 mSketchView = sketchView;
 }

 public void setRunning(boolean run) {
 mRun = run;
 }

 public SurfaceHolder getSurfaceHolder() {
 return mSurfaceHolder;
 }

 @Override
 public void run() {
 Canvas c;
 while (mRun) {
 c = null;
 try {
 // obtain the canvas to draw
 c = mSurfaceHolder.lockCanvas(null);
 synchronized (mSurfaceHolder) {
 mSketchView.onDraw(c);
 }
 } finally {
 // do this in a finally so that if an exception is thrown
 // during the above, we don't leave the Surface in an
 // inconsistent state
 if (c != null) {
 // post the drawn canvas
 mSurfaceHolder.unlockCanvasAndPost(c);
 }
 }
 }
 }
 }

}
