Lizard & Dog Blog

Implementing Camera 2 for Android – Java – Part 1

Foreword:

Are you struggling to implement Camera2 on Android? Working with camera functionality in Android can be a complex task, especially if you’re new to it. Fear not: in this article, we’ll provide a step-by-step guide to help you set up Camera2 in your Android application. By following our instructions, you’ll be able to harness the power of the Camera2 API and start building camera-based features for your app. So, let’s dive in and unlock the potential of Camera2.

Why choose Camera2 over CameraX:

Think of Camera2 API as a super-advanced camera that lets you control everything from the exposure time to the ISO and white balance. It’s like having a professional DSLR camera in your phone! But, because it’s so advanced, it can be a bit complicated to use and might require some extra effort and knowledge.

On the other hand, CameraX is like a cute little point-and-shoot camera that’s easy to use and provides a consistent experience across different Android devices. It’s great for everyday use and doesn’t require a lot of technical knowledge.

But here’s the catch – if you want to capture high-speed video or burst shots, or if you want to do some really fancy image processing, then you might need to use the advanced features of Camera2 API. It’s like needing to use a professional camera to capture a really cool action shot or a beautiful landscape.

So, it’s like choosing between a simple and easy-to-use camera or a professional and advanced camera depending on what you need to capture.

As we always choose the most difficult option, we will obviously be going with Camera2.

Goal of this tutorial:

The first part of this tutorial will focus on capturing the feed of the Camera and displaying it to a custom TextureView (AutoFitTextureView) on which you will be able to zoom in and out.

In later parts of this tutorial we will focus on capturing the image and discuss potential image processing techniques you could apply to it.

Setup

In this tutorial, we will create an Android Activity implementing the Camera2 API, as well as one custom view (AutoFitTextureView).

To begin, create a new Android Studio Project and choose an empty Activity.

Add the following permission to your Manifest (note that CAMERA is a dangerous permission, so on Android 6.0 and above you must also request it at runtime before opening the camera):

<uses-permission android:name="android.permission.CAMERA" />

Let’s move on to the code, starting with the View that will display the Camera feed.

Code

1. AutoFitTextureView:

Introducing the AutoFitTextureView for your camera display needs!

The AutoFitTextureView is a custom Android view that resizes itself to match the aspect ratio of the camera preview and adds pinch-to-zoom functionality.

It maintains the aspect ratio through the setAspectRatio() method, which takes two integers representing the width and height of the desired ratio.

The getZoomCaracteristics interface provides information about the camera’s zoom capabilities, such as the current zoom region and the maximum zoom level.

The zoom functionality uses the ScaleGestureDetector class to zoom in and out when the user pinches the screen.
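Before diving into the full class, note that the core of onMeasure (the method that enforces the ratio set by setAspectRatio) is a small piece of pure arithmetic. Here is a plain-Java sketch of that computation, written as a standalone function so it runs off-device; the class and method names (AspectFit.measure) are ours for illustration, not part of the view below:

```java
class AspectFit {
    // Same arithmetic as AutoFitTextureView.onMeasure below: fit a
    // ratioW:ratioH rectangle inside the available width x height.
    static int[] measure(int width, int height, int ratioW, int ratioH) {
        if (ratioW == 0 || ratioH == 0) {
            // No ratio set yet: keep the size the parent gave us
            return new int[]{width, height};
        }
        if (width < height * ratioW / ratioH) {
            // Width-constrained: derive the height from the ratio
            return new int[]{width, width * ratioH / ratioW};
        }
        // Height-constrained: derive the width from the ratio
        return new int[]{height * ratioW / ratioH, height};
    }
}
```

For example, a 1080:1920 ratio inside a 1080×2000 portrait window yields 1080×1920: the width is the limiting dimension, and the height follows from the ratio.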

public class AutoFitTextureView extends TextureView {
    // Ratio of width to height of the view
    private int mRatioWidth = 0;
    private int mRatioHeight = 0;

    // Interface to get zoom characteristics from the camera
    public getZoomCaracteristics getZoomCaracteristics;

    // Boolean to keep track if it's the first time measuring
    boolean firstMeasure;

    // Rectangles to hold camera zoom area
    Rect cameraRecteub;
    public Rect cameraRecteub1;

    // Float to hold maximum zoom level
    float maxZoomTeub;

    // Integer to keep track of current zoom level
    int zoom_level;

    // Scale gesture detector to detect pinch zoom gestures
    private ScaleGestureDetector mScaleDetector;

    // Float to hold the current scale factor
    private float mScaleFactor = 3.f;

    // Boolean to keep track if it's the first time getting the max zoom level
    private boolean isfirstmaxzoomteub;

    // Constructors: the one- and two-argument versions chain down to the
    // three-argument version, which performs the initialization exactly once
    // (re-initializing after the chained call would create two gesture detectors).
    public AutoFitTextureView(Context context) {
        this(context, null);
    }

    public AutoFitTextureView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public AutoFitTextureView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        this.getZoomCaracteristics = null;
        this.firstMeasure = true;
        this.zoom_level = 0;
        this.isfirstmaxzoomteub = true;

        // Create the scale gesture detector used for pinch-to-zoom
        mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
    }


    // Interface to get zoom characteristics from the camera
    public interface getZoomCaracteristics{

        // Method to get the zoom rectangle from the camera
        Rect giveRectZoom() throws CameraAccessException;

        // Method to get the maximum zoom level from the camera
        float giveMaxZoom() throws CameraAccessException;

        // Method to set the zoom level for preview
        void previewRequestINT(Rect rect);

        // Method to create a capture session with the current zoom level
        void captureSession() throws CameraAccessException;

    }

    // Method to set the interface to get zoom characteristics from the camera
    public void setGetZoomCaracteristics(getZoomCaracteristics getZoomCaracteristics){
        this.getZoomCaracteristics=getZoomCaracteristics;
    }

    // Method to set the aspect ratio of the view
    public void setAspectRatio(int width, int height) {
        // Check if width and height are non-negative values, if not, throw an exception with a message
        if (width < 0 || height < 0) {
            throw new IllegalArgumentException("Size cannot be negative.");
        }
        // Assign the width and height values to the class variables
        mRatioWidth = width;
        mRatioHeight = height;
        // Request a layout to update the view based on the new aspect ratio
        requestLayout();
    }


    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
        // Gets the measured width and height values from the passed in widthMeasureSpec and heightMeasureSpec
        int width = MeasureSpec.getSize(widthMeasureSpec);
        int height = MeasureSpec.getSize(heightMeasureSpec);
        if (0 == mRatioWidth || 0 == mRatioHeight) {
            // If either mRatioWidth or mRatioHeight is 0, then set the dimensions to the passed in values
            setMeasuredDimension(width, height);
        } else {
            // Calculate the ratio of width and height to mRatioWidth and mRatioHeight and set the dimension accordingly
            if (width < height * mRatioWidth / mRatioHeight) {
                setMeasuredDimension(width, width * mRatioHeight / mRatioWidth);
            } else {
                setMeasuredDimension(height * mRatioWidth / mRatioHeight, height);
            }
        }
    }


    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Pass touch event to scale gesture detector
        mScaleDetector.onTouchEvent(event);
        return true;
    }


    private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
        @Override
        public boolean onScale(ScaleGestureDetector detector) {
            // Get maximum zoom and current zoom rectangle if they have not been retrieved yet
            if (getZoomCaracteristics!=null && isfirstmaxzoomteub){
                try {
                    maxZoomTeub=  Math.min(3f,getZoomCaracteristics.giveMaxZoom());
                    isfirstmaxzoomteub=false;
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
                try {
                    cameraRecteub=getZoomCaracteristics.giveRectZoom();
                    cameraRecteub1=getZoomCaracteristics.giveRectZoom();

                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }

            // If the zoom characteristics have not been provided yet, there is nothing to scale
            if (cameraRecteub == null || cameraRecteub1 == null) {
                return true;
            }
            // Get the scale factor from the gesture detector
            mScaleFactor = detector.getScaleFactor();
            // Compute the eventual width after scaling
            float eventualWidth = cameraRecteub.width() / mScaleFactor;
            if (mScaleFactor>1){
                if (eventualWidth<cameraRecteub1.width()/3.0){ // If the eventual width is less than one third of the maximum width

                    cameraRecteub.set((int) ( 0.5f*(2*cameraRecteub1.width()/3.0)),
                            (int) ( 0.5f*(2*cameraRecteub1.height()/3)),
                            (int) ( 0.5f*(4*cameraRecteub1.width()/3)),
                            (int) ( 0.5f*(4*cameraRecteub1.height()/3)));

                }
                else{ // If the eventual width is greater than or equal to one third of the maximum width
                    cameraRecteub.set((int) ( 0.5f*(cameraRecteub1.width()-cameraRecteub.width()/mScaleFactor)),
                            (int) ( 0.5f*(cameraRecteub1.height()-cameraRecteub.height()/mScaleFactor)),
                            (int) ( 0.5f*(cameraRecteub1.width()+cameraRecteub.width()/mScaleFactor)),
                            (int) ( 0.5f*(cameraRecteub1.height()+cameraRecteub.height()/mScaleFactor)));
                }

            }
            else { // If scaling down
                if (eventualWidth > cameraRecteub1.width()) {
                    // Already fully zoomed out: reset to the full sensor area
                    cameraRecteub.set(cameraRecteub1);
                } else { // Otherwise compute the centered, enlarged crop region
                    cameraRecteub.set((int) (0.5f * (cameraRecteub1.width() - cameraRecteub.width() / mScaleFactor)),
                            (int) (0.5f * (cameraRecteub1.height() - cameraRecteub.height() / mScaleFactor)),
                            (int) (0.5f * (cameraRecteub1.width() + cameraRecteub.width() / mScaleFactor)),
                            (int) (0.5f * (cameraRecteub1.height() + cameraRecteub.height() / mScaleFactor)));
                }
            }

            // Apply the new crop region to the preview request and restart the session
            if (getZoomCaracteristics != null) {
                getZoomCaracteristics.previewRequestINT(cameraRecteub);
                try {
                    getZoomCaracteristics.captureSession();
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                }
            }
            return true;
        }
    }

}
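If the crop-region arithmetic in onScale looks dense, the same idea can be written as a small self-contained class. Below is a sketch with android.graphics.Rect replaced by plain int fields so it runs off-device (the name ZoomRegion is ours): given the sensor’s active-array size, each pinch step produces a centered crop rectangle, clamped between the full sensor (1x) and one third of it (roughly 3x), exactly like the view above.

```java
class ZoomRegion {
    // Width and height of the full sensor active array (the "no zoom" crop region)
    final int sensorW, sensorH;
    // Current crop region, stored as left/top/right/bottom like android.graphics.Rect
    int left, top, right, bottom;

    ZoomRegion(int sensorW, int sensorH) {
        this.sensorW = sensorW;
        this.sensorH = sensorH;
        // Start fully zoomed out: the crop region covers the whole sensor
        set(0, 0, sensorW, sensorH);
    }

    private void set(int l, int t, int r, int b) {
        left = l; top = t; right = r; bottom = b;
    }

    int width()  { return right - left; }
    int height() { return bottom - top; }

    // Apply one pinch step. scaleFactor > 1 zooms in, < 1 zooms out.
    void applyScale(float scaleFactor) {
        float eventualWidth = width() / scaleFactor;
        if (eventualWidth < sensorW / 3.0f) {
            // Clamp at ~3x zoom: crop region is the centered third of the sensor
            set((int) (sensorW / 3.0f), (int) (sensorH / 3.0f),
                (int) (2 * sensorW / 3.0f), (int) (2 * sensorH / 3.0f));
        } else if (eventualWidth > sensorW) {
            // Clamp at 1x: crop region is the full sensor
            set(0, 0, sensorW, sensorH);
        } else {
            // Centered crop of the scaled size
            float newW = width() / scaleFactor;
            float newH = height() / scaleFactor;
            set((int) (0.5f * (sensorW - newW)), (int) (0.5f * (sensorH - newH)),
                (int) (0.5f * (sensorW + newW)), (int) (0.5f * (sensorH + newH)));
        }
    }
}
```

Passing the resulting rectangle to CaptureRequest.SCALER_CROP_REGION, as the view does through previewRequestINT, is what actually zooms the preview.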

You could also do a simpler version of this view, through a more basic TextureView.

Once the code for the view is in place, add it to your layout.

Now, onwards to MainActivity.

2. MainActivity:

It’s crucial to set up camera orientation values and configure other important parameters to achieve picture-perfect results. Take a look at the following code block for initializing essential camera variables such as camera states, media recording settings, texture view, image reader, file path, flash mode, and more.

For reference, the definition of the CameraService class will be provided later in this article.

Below you will find the variable definitions, with comments explaining how each one is used in the context of our code (add them to the MainActivity class):

2.1. Variables:
 // SparseIntArray that maps device orientation to degrees
    private static final SparseIntArray ORIENTATIONS = new SparseIntArray();

    // Assigning device orientation to degrees
    static {
        ORIENTATIONS.append(Surface.ROTATION_0, 90);
        ORIENTATIONS.append(Surface.ROTATION_90, 0);
        ORIENTATIONS.append(Surface.ROTATION_180, 270);
        ORIENTATIONS.append(Surface.ROTATION_270, 180);
    }

    // States of the camera state machine
    private static final int STATE_PREVIEW = 0;
    private static final int STATE_WAITING_LOCK = 1;
    private static final int STATE_WAITING_PRECAPTURE = 2;
    private static final int STATE_WAITING_NON_PRECAPTURE = 3;
    private static final int STATE_PICTURE_TAKEN = 4;

    // Maximum preview size of the camera
    private static final int MAX_PREVIEW_WIDTH = 1920;
    private static final int MAX_PREVIEW_HEIGHT = 1080;



    // Array of available camera services
    private CameraService[] cameraServiceList;

   // Index of the currently opened camera (default to zero)
    private int openedCamera=0;

    // Texture view to display the camera preview
    private AutoFitTextureView mTextureView;

    // Size of the camera preview
    private Size mPreviewSize;

    // Background thread for camera operations
    private HandlerThread mBackgroundThread;

    // Handler for the background thread
    private Handler mBackgroundHandler;

    // Image reader to capture still images
    private ImageReader mImageReader;

    // Byte array to store the captured image
    public byte[] byteArrayImage;

    // Builder for the camera preview request
    private CaptureRequest.Builder mPreviewRequestBuilder;

    // Camera preview request
    private CaptureRequest mPreviewRequest;

    // Current state of the camera state machine
    private int mState = STATE_PREVIEW;

    // Semaphore used to lock the camera while in use

    private Semaphore mCameraOpenCloseLock = new Semaphore(1);
    /*A Semaphore is a synchronization primitive in Java that is used to control access to a shared resource.
    The count of a Semaphore represents the number of permits available to access the shared resource.
    In this case, the Semaphore is being used to control access to a camera resource.
    The count of 1 means that only one thread can access the camera resource at a time,
    so this Semaphore is being used to enforce mutual exclusion and prevent multiple threads
    from accessing the camera resource simultaneously.The Semaphore will be acquired (decremented)
    when a thread requests access to the camera,
    and it will be released (incremented) when the thread is finished using the camera.*/

    // Boolean to indicate if the device's flash is supported
    private boolean mFlashSupported;

    // Orientation of the camera sensor
    private int mSensorOrientation;

    // Flash mode, 0 for off, 1 for auto, 2 for always on
    private int flashMode;
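
As an aside, the ORIENTATIONS table above will be combined with the sensor orientation when we capture stills in a later part. The standard Camera2 formula for the JPEG orientation is worth seeing in isolation; here is a plain-Java sketch, with the SparseIntArray replaced by a switch over the Surface.ROTATION_* integer values (0 through 3) so it runs off-device:

```java
class JpegOrientation {
    // Mirrors the ORIENTATIONS table: display rotation constant -> degrees
    static int displayRotationDegrees(int surfaceRotation) {
        switch (surfaceRotation) {
            case 0: return 90;   // Surface.ROTATION_0
            case 1: return 0;    // Surface.ROTATION_90
            case 2: return 270;  // Surface.ROTATION_180
            case 3: return 180;  // Surface.ROTATION_270
            default: throw new IllegalArgumentException("bad rotation " + surfaceRotation);
        }
    }

    // Standard Camera2 JPEG orientation: combine the display rotation with the
    // sensor orientation; the +270 compensates for the table's built-in 90-degree offset.
    static int jpegOrientation(int surfaceRotation, int sensorOrientation) {
        return (displayRotationDegrees(surfaceRotation) + sensorOrientation + 270) % 360;
    }
}
```

For a typical back camera with a 90-degree sensor, a portrait device (ROTATION_0) yields 90, and a landscape device (ROTATION_90) yields 0.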

Now let’s have a look at the onCreate method:

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Classic Android, you have to find your view by ID
        mTextureView=findViewById(R.id.autoFitTextureView);


        startBackgroundThread(); // See below: this launches the background thread used for camera operations

        if (mTextureView.isAvailable()) {
            try {
                // Launch camera if the textureView is ready
                openCamera(mTextureView.getWidth(), mTextureView.getHeight());
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        } else {
            // Wait for the texture View to be ready to launch camera
            mTextureView.setSurfaceTextureListener(autoFitTextureListener);
        }

    }

In the onCreate method, you can see that we set up the AutoFitTextureView to display the camera preview. It starts a background thread for camera operations and opens the camera if the TextureView is ready. If the TextureView isn’t ready, a listener is attached to it, as defined below:

    // Listener that waits for the TextureView's SurfaceTexture to become available
    private final TextureView.SurfaceTextureListener autoFitTextureListener
            = new TextureView.SurfaceTextureListener() {

        @Override
        public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {

            try {
                openCamera(width, height);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        }
        @Override
        public void onSurfaceTextureSizeChanged(SurfaceTexture texture, int width, int height) {
            configureTransform(width, height);
        }
        @Override
        public boolean onSurfaceTextureDestroyed(SurfaceTexture texture) {
            return true;
        }
        @Override
        public void onSurfaceTextureUpdated(SurfaceTexture texture) {
        }

    }; 

Let’s break down the methods used in onCreate and in the listener:

2.2. openCamera Method:
    private void openCamera(int width, int height) throws CameraAccessException {
        // Check if the app has permission to access the camera. If you encounter an error here, make sure to import android.Manifest in your activity. 
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
                != PackageManager.PERMISSION_GRANTED) {
            // If not, return and do not proceed with opening the camera
            return;
        }

        // Set up camera outputs and transform
        setUpCameraOutputs(width, height);
        configureTransform(width, height);

        // Get an instance of the CameraManager
        CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
        try {
            // Use a lock to ensure proper opening and closing of the camera
            if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting to lock camera opening.");
            }
            // Open the camera with the given camera ID and set callbacks to handle camera events
            manager.openCamera(cameraServiceList[openedCamera].CameraIDD, cameraStateCallback, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
        }
    }

The openCamera method in this code block is responsible for opening the camera using the Camera2 API. It takes the width and height of the camera preview surface, sets up the camera outputs, and configures the transform (so that the preview displays correctly, without distortion). It also uses a lock to ensure that the camera is opened and closed properly. If the app does not have the necessary permission to access the camera, the method returns without opening it.
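The locking pattern is plain java.util.concurrent and is easy to try in isolation: tryAcquire with a timeout fails fast instead of blocking forever when another thread holds the camera, which is why the code above throws after 2500 ms rather than hanging. A minimal sketch (the class and method names are ours for illustration):

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

class CameraLockDemo {
    // One permit: at most one thread may "hold" the camera at a time
    static final Semaphore lock = new Semaphore(1);

    // Returns true if we got exclusive access within the timeout, false otherwise
    static boolean tryOpen(long timeoutMs) {
        try {
            return lock.tryAcquire(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    // Release the permit when done with the camera
    static void close() {
        lock.release();
    }
}
```

A second caller times out instead of deadlocking, which in the real code surfaces as the “Time out waiting to lock camera opening” RuntimeException.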

When the method is called, it first runs two operations:

2.3. SetUpCameraOutputs:
    private void setUpCameraOutputs(int width, int height) throws CameraAccessException {
        Activity activity = this;
        CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
        // Get the list of available cameras:
        cameraServiceList= new CameraService[manager.getCameraIdList().length];
        mTextureView.setGetZoomCaracteristics(new AutoFitTextureView.getZoomCaracteristics() {
            @Override
            public Rect giveRectZoom() throws CameraAccessException {
                CameraManager manager12 = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
                CameraCharacteristics characteristics = manager12.getCameraCharacteristics(cameraServiceList[openedCamera].CameraIDD);
                return characteristics.get(CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE);
            }

            @Override
            public float giveMaxZoom() throws CameraAccessException {
                CameraManager manager12 = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
                CameraCharacteristics characteristics = manager12.getCameraCharacteristics(cameraServiceList[openedCamera].CameraIDD);
                return characteristics.get(CameraCharacteristics.SCALER_AVAILABLE_MAX_DIGITAL_ZOOM)*10 ;
            }

            @Override
            public void previewRequestINT(Rect rect) {
                mPreviewRequestBuilder.set(CaptureRequest.SCALER_CROP_REGION, rect);
            }

            @Override
            public void captureSession() throws CameraAccessException {
                cameraServiceList[openedCamera].captureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null,
                        mBackgroundHandler);
            }
        });
        // Retrieve information about each camera:
        try {
            for (String cameraId : manager.getCameraIdList()) {
                boolean poscam=true; // true if camera is front-facing, false otherwise
                // Retrieve the characteristics of the camera
                CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);

                Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
                if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
                    // poscam is true if the camera looks towards the face :)
                    poscam = true;
                } else if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK) {
                    // poscam is false if we use the back camera
                    poscam = false;
                }

                StreamConfigurationMap map = characteristics.get(
                        CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                if (map == null) {
                    /* The stream configuration map can be null for some camera IDs;
                       in that case we skip any further processing for that camera
                       and continue with the next one. */
                    continue;
                }

                // this allows us to see the largest dimensions we can get
                Size largest = Collections.max(
                        Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
                        new CompareSizesByArea());

                // Find out if we need to swap dimensions to get the preview size relative to the sensor coordinates.
                int displayRotation = activity.getDisplay().getRotation();

                mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);

                boolean swappedDimensions = false;
                switch (displayRotation) {
                    case Surface.ROTATION_0:
                    case Surface.ROTATION_180:
                        if (mSensorOrientation == 90 || mSensorOrientation == 270) {
                            swappedDimensions = true;
                        }
                        break;
                    case Surface.ROTATION_90:
                    case Surface.ROTATION_270:
                        if (mSensorOrientation == 0 || mSensorOrientation == 180) {
                            swappedDimensions = true;
                        }
                        break;
                    default:
                }

                // Get the dimensions of the application window using WindowMetrics
                WindowMetrics windowMetrics = activity.getWindowManager().getCurrentWindowMetrics();
                Rect bounds = windowMetrics.getBounds();
                int display_X = bounds.width();
                int display_Y = bounds.height();
                // Initialize variables for preview size
                int rotatedPreviewWidth = width;
                int rotatedPreviewHeight = height;
                int maxPreviewWidth = display_X;
                int maxPreviewHeight = display_Y;

                // Swap dimensions if needed (the display bounds must be swapped too,
                // so the limits are expressed in sensor coordinates)
                if (swappedDimensions) {
                    rotatedPreviewWidth = height;
                    rotatedPreviewHeight = width;
                    maxPreviewWidth = display_Y;
                    maxPreviewHeight = display_X;
                }

                // Limit the max preview width and height
                if (maxPreviewWidth > MAX_PREVIEW_WIDTH) {
                    maxPreviewWidth = MAX_PREVIEW_WIDTH;
                }

                if (maxPreviewHeight > MAX_PREVIEW_HEIGHT) {
                    maxPreviewHeight = MAX_PREVIEW_HEIGHT;
                }

                // Choose the optimal preview size based on the available sizes and the desired dimensions.
                mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
                        rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
                        maxPreviewHeight, largest);

                // We fit the aspect ratio of TextureView to the size of preview we picked.
                int orientation = getResources().getConfiguration().orientation;
                if (orientation == Configuration.ORIENTATION_LANDSCAPE) {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getWidth(), mPreviewSize.getHeight());
                } else {
                    mTextureView.setAspectRatio(
                            mPreviewSize.getHeight(), mPreviewSize.getWidth());
                }

                // Check if the flash is supported.
                Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
                mFlashSupported = available == null ? false : available;

                // Initialize the CameraService for the current camera.
                // (Camera IDs are assumed to be numeric here, which holds for typical built-in cameras.)
                cameraServiceList[Integer.parseInt(cameraId)]=new CameraService(cameraId,poscam);
                cameraServiceList[Integer.parseInt(cameraId)].setSensorOrientation(mSensorOrientation);
                cameraServiceList[Integer.parseInt(cameraId)].setFlashSupported(mFlashSupported);


            }
        } catch (CameraAccessException e) {
            e.printStackTrace();
        } catch (NullPointerException e) {
            // Some characteristics can come back null on certain devices; skip them
        }
    }

The method takes in the width and height of the camera preview as parameters and throws a CameraAccessException.

We start by initializing the CameraManager, getting the list of available cameras, and wiring up the custom TextureView (AutoFitTextureView). The method then loops through each available camera and reads its characteristics.

Then it determines the display rotation and whether the preview dimensions need to be swapped to match the sensor coordinate. It chooses the optimal preview size based on the available sizes and the display dimensions. It also checks if the camera flash is supported and sets up a CameraService object to store information about the camera.
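The dimension-swap decision is a pure function of two angles. As a sketch, with the Surface.ROTATION_* constants replaced by their integer values (0 through 3) so it runs off-device:

```java
class DimensionSwap {
    // Returns true when the preview width/height must be swapped so that the
    // requested size is expressed in the sensor's coordinate system.
    static boolean swappedDimensions(int displayRotation, int sensorOrientation) {
        switch (displayRotation) {
            case 0: // Surface.ROTATION_0
            case 2: // Surface.ROTATION_180
                // Natural orientation: swap when the sensor is rotated 90 or 270 degrees
                return sensorOrientation == 90 || sensorOrientation == 270;
            case 1: // Surface.ROTATION_90
            case 3: // Surface.ROTATION_270
                // Rotated display: swap when the sensor is NOT rotated
                return sensorOrientation == 0 || sensorOrientation == 180;
            default:
                return false;
        }
    }
}
```

On a typical phone held in portrait (ROTATION_0) with a 90-degree back sensor, the swap applies; rotate the phone to landscape and it no longer does.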

The CameraService class is defined below, and allows us to store information on each camera:

    public class CameraService{
        String CameraIDD;  // A unique identifier for the camera
        CameraDevice mCameraDevice; // A reference to the CameraDevice object
        boolean frontOrBackCamera; // A boolean to indicate if the camera is front-facing or back-facing
        boolean flashSupported; // A boolean to indicate if the camera has a flash
        CameraCaptureSession captureSession; // A reference to the CameraCaptureSession object
        int sensorOrientation; // An integer to indicate the orientation of the camera sensor

        // Getter and setter methods for the member variables
        public int getSensorOrientation() {
            return sensorOrientation;
        }

        public void setSensorOrientation(int sensorOrientation) {
            this.sensorOrientation = sensorOrientation;
        }

        public String getCameraIDD() {
            return CameraIDD;
        }

        public void setCameraIDD(String cameraIDD) {
            CameraIDD = cameraIDD;
        }

        public CameraDevice getmCameraDevice() {
            return mCameraDevice;
        }

        public void setmCameraDevice(CameraDevice mCameraDevice) {
            this.mCameraDevice = mCameraDevice;
        }

        public boolean isFrontOrBackCamera() {
            return frontOrBackCamera;
        }

        public void setFrontOrBackCamera(boolean frontOrBackCamera) {
            this.frontOrBackCamera = frontOrBackCamera;
        }

        public boolean isFlashSupported() {
            return flashSupported;
        }

        public void setCaptureSession(CameraCaptureSession captureSession) {
            this.captureSession = captureSession;
        }

        public CameraCaptureSession getCaptureSession() {
            return captureSession;
        }

        // Constructor to initialize the member variables
        public CameraService(String cameraIDD, boolean frontOrBackCamera) {
            this.CameraIDD = cameraIDD;
            this.frontOrBackCamera = frontOrBackCamera;
        }

        public void setFlashSupported(boolean flashSupported) {
            this.flashSupported = flashSupported;
        }
    }

The purpose of this class is to provide a wrapper around the camera device and its associated objects and properties. It stores the camera identifier, the CameraDevice object, whether the camera is front- or back-facing, whether it has a flash, the sensor orientation, and the CameraCaptureSession object, and exposes getters and setters for each.

The method also wires up a getZoomCaracteristics implementation, which the AutoFitTextureView uses to read the sensor’s active array and maximum digital zoom, apply a crop region to the preview request, and restart the repeating capture request.

The method also relies on two helper methods, chooseOptimalSize and the CompareSizesByArea comparator:

  private static Size chooseOptimalSize(Size[] choices, int textureViewWidth, int textureViewHeight, int maxWidth, int maxHeight, Size aspectRatio) {

        // Collect the supported resolutions that are at least as big as the preview Surface
        List<Size> bigEnough = new ArrayList<>();
        // Collect the supported resolutions that are smaller than the preview Surface
        List<Size> notBigEnough = new ArrayList<>();
        int w = aspectRatio.getWidth();
        int h = aspectRatio.getHeight();
        for (Size option : choices) {
            if (option.getWidth() <= maxWidth && option.getHeight() <= maxHeight &&
                    option.getHeight() == option.getWidth() * h / w) {
                if (option.getWidth() >= textureViewWidth &&
                        option.getHeight() >= textureViewHeight) {
                    bigEnough.add(option);
                } else {
                    notBigEnough.add(option);
                }
            }
        }

        // Pick the smallest of those big enough. If there is no one big enough, pick the
        // largest of those not big enough.
        if (bigEnough.size() > 0) {
            return Collections.min(bigEnough, new CompareSizesByArea());
        } else if (notBigEnough.size() > 0) {
            return Collections.max(notBigEnough, new CompareSizesByArea());
        } else {
            return choices[0];
        }
    }

    static class CompareSizesByArea implements Comparator<Size> {

        @Override
        public int compare(Size lhs, Size rhs) {
            // We cast here to ensure the multiplications won't overflow
            return Long.signum((long) lhs.getWidth() * lhs.getHeight() -
                    (long) rhs.getWidth() * rhs.getHeight());
        }

    }
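If you want to see chooseOptimalSize in action without deploying to a device, here is a self-contained sketch. The plain Size stand-in class is an assumption made purely because android.util.Size is unavailable on a desktop JVM; the selection logic itself is the same as above:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class ChooseSizeDemo {
    // Plain stand-in for android.util.Size so the logic runs off-device.
    static class Size {
        final int width, height;
        Size(int w, int h) { width = w; height = h; }
        int getWidth()  { return width; }
        int getHeight() { return height; }
        public String toString() { return width + "x" + height; }
    }

    static class CompareSizesByArea implements Comparator<Size> {
        public int compare(Size lhs, Size rhs) {
            // Cast to long so the multiplications cannot overflow.
            return Long.signum((long) lhs.getWidth() * lhs.getHeight()
                    - (long) rhs.getWidth() * rhs.getHeight());
        }
    }

    static Size chooseOptimalSize(Size[] choices, int viewW, int viewH,
                                  int maxW, int maxH, Size aspect) {
        List<Size> bigEnough = new ArrayList<>();     // at least as big as the view
        List<Size> notBigEnough = new ArrayList<>();  // smaller than the view
        int w = aspect.getWidth(), h = aspect.getHeight();
        for (Size option : choices) {
            if (option.getWidth() <= maxW && option.getHeight() <= maxH
                    && option.getHeight() == option.getWidth() * h / w) {
                if (option.getWidth() >= viewW && option.getHeight() >= viewH) {
                    bigEnough.add(option);
                } else {
                    notBigEnough.add(option);
                }
            }
        }
        if (!bigEnough.isEmpty())    return Collections.min(bigEnough, new CompareSizesByArea());
        if (!notBigEnough.isEmpty()) return Collections.max(notBigEnough, new CompareSizesByArea());
        return choices[0];
    }

    public static void main(String[] args) {
        Size[] supported = { new Size(1920, 1080), new Size(1280, 720), new Size(640, 480) };
        // For a 1280x720 view, a 1080p cap, and a 16:9 target aspect ratio,
        // 1280x720 is the smallest supported size that still covers the view.
        System.out.println(chooseOptimalSize(supported, 1280, 720, 1920, 1080,
                new Size(1920, 1080)));  // 1280x720
    }
}
```

Note how 640x480 is filtered out before the size comparison ever happens: it fails the aspect-ratio check, not the size check.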

Now let’s return to openCamera and focus on the next method:

2.4. configureTransform
    private void configureTransform(int viewWidth, int viewHeight) {
        Activity activity = this;

        // Check if TextureView, preview size or activity are null, then return
        if (null == mTextureView || null == mPreviewSize || null == activity) {
            return;
        }

        // Get the current rotation of the display
        int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();

        // Create a new Matrix object
        Matrix matrix = new Matrix();

        // Create a rectangle representing the view bounds
        RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);

        // Create a rectangle representing the preview size
        RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());

        // Calculate the center point of the view bounds
        float centerX = viewRect.centerX();
        float centerY = viewRect.centerY();

        // If the rotation is 90 or 270 degrees, adjust the buffer rectangle
        if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
            bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());

            // Scale the preview to fill the view, then rotate it
            matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
            float scale = Math.max(
                    (float) viewHeight / mPreviewSize.getHeight(),
                    (float) viewWidth / mPreviewSize.getWidth());
            matrix.postScale(scale, scale, centerX, centerY);
            matrix.postRotate(90 * (rotation - 2), centerX, centerY);

        }
        // If the rotation is 180 degrees, just rotate the preview
        else if (Surface.ROTATION_180 == rotation) {
            matrix.postRotate(180, centerX, centerY);
        }

        // Apply the matrix to the TextureView
        mTextureView.setTransform(matrix);
    }

The configureTransform() method is responsible for setting up the transformation matrix for the TextureView, which displays the camera preview. The method takes in the dimensions of the TextureView as parameters and checks that all necessary components (mTextureView, mPreviewSize, and activity) are not null before proceeding.

Next, the method gets the current device rotation from the WindowManager and initializes a new Matrix object. It also creates two RectF objects to represent the view and preview areas, respectively, and calculates the center point of the view.

If the device is rotated 90 or 270 degrees, the method calculates the scaling factor required to fill the view with the preview and sets up the transformation matrix accordingly. The matrix is first set to fill the entire view, then scaled by the appropriate factor, and finally rotated by an angle based on the current device rotation. The resulting transformation is then applied to the TextureView.

If the device is rotated 180 degrees, the method simply rotates the matrix by 180 degrees and applies it to the TextureView.
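The scale and rotation arithmetic can be checked in isolation. The sketch below reproduces just those two computations with plain floats; android.graphics.Matrix and android.view.Surface are unavailable off-device, so the ROTATION_90 and ROTATION_270 constants are written out as their integer values (1 and 3):

```java
public class TransformDemo {
    // Reproduces the scale computation from configureTransform: the larger
    // of the two axis ratios, so the rotated preview fully covers the view.
    static float scaleFor(int viewW, int viewH, int previewW, int previewH) {
        return Math.max((float) viewH / previewH, (float) viewW / previewW);
    }

    // Reproduces the rotation angle: Surface.ROTATION_90 == 1 maps to -90°,
    // Surface.ROTATION_270 == 3 maps to +90°.
    static int rotationAngleFor(int surfaceRotation) {
        return 90 * (surfaceRotation - 2);
    }

    public static void main(String[] args) {
        // A 1080x1920 portrait view showing a 1920x1080 preview buffer:
        System.out.println(scaleFor(1080, 1920, 1920, 1080)); // ≈ 1.778
        System.out.println(rotationAngleFor(1));              // -90
        System.out.println(rotationAngleFor(3));              // 90
    }
}
```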

Let’s go back once again to openCamera and focus on the second part of the code (no need to copy and paste it again if you already have it):

  // Get an instance of the CameraManager
    CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
    try {
        // Use a lock to ensure proper opening and closing of the camera
        if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        // Open the camera with the given camera ID and set callbacks to handle camera events
        manager.openCamera(cameraServiceList[openedCamera].CameraIDD, cameraStateCallback, mBackgroundHandler);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    } catch (InterruptedException e) {
        throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
    }
}

At this stage, if no exceptions or errors have been raised, cameraServiceList should contain a list of the available cameras and their characteristics.

The CameraManager then calls openCamera(), passing the camera ID as input along with cameraStateCallback and mBackgroundHandler, which we’ll define now.
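The semaphore guard around openCamera is plain java.util.concurrent, so its behavior can be sketched on its own. CameraLockDemo below is a hypothetical stand-alone class: on a device, manager.openCamera(...) would run where the comment indicates, and the matching release() happens inside the state callback we define next:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class CameraLockDemo {
    // Same guard pattern as mCameraOpenCloseLock: a binary semaphore with a
    // 2500 ms timeout, so a stuck open or close cannot hang the app forever.
    static final Semaphore lock = new Semaphore(1);

    static boolean tryOpen() throws InterruptedException {
        if (!lock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
            throw new RuntimeException("Time out waiting to lock camera opening.");
        }
        // On a device, manager.openCamera(...) would run here; the matching
        // release() is done in the CameraDevice.StateCallback.
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(tryOpen());         // true: lock acquired
        System.out.println(lock.tryAcquire()); // false: held until the callback releases it
        lock.release();
    }
}
```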

2.5. cameraStateCallback
    private final CameraDevice.StateCallback cameraStateCallback = new CameraDevice.StateCallback() {

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            // This method is called when the camera is opened. We release the lock and set the camera device
            // for the current camera service, then create a camera preview session.
            mCameraOpenCloseLock.release();
            cameraServiceList[openedCamera].mCameraDevice = cameraDevice;
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            // This method is called when the camera is disconnected. We release the lock, close the camera device,
            // and set the camera device for the current camera service to null.
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            cameraServiceList[openedCamera].mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            // This method is called when an error occurs with the camera device. We release the lock, close the camera device,
            // set the camera device for the current camera service to null, and finish the activity.
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            cameraServiceList[openedCamera].mCameraDevice = null;
            finish();

        }
    };

This code defines an implementation of the CameraDevice.StateCallback interface as a private final variable named cameraStateCallback. This interface provides methods that are called when the state of the camera device changes.

The onOpened() method is called when the camera device is opened, and the method releases a lock that was previously held by mCameraOpenCloseLock. Then, it sets the CameraDevice object in the corresponding CameraService object’s mCameraDevice variable and calls createCameraPreviewSession().

    private void createCameraPreviewSession() {
        try {

            SurfaceTexture texture = mTextureView.getSurfaceTexture();
            assert texture != null;
            // We configure the size of default buffer to be the size of camera preview we want.
            texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());

            // This is the output Surface we need to start preview.
            Surface surface = new Surface(texture);

            // We set up a CaptureRequest.Builder with the output Surface.
            mPreviewRequestBuilder
                    = cameraServiceList[openedCamera].mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_RECORD);
            mPreviewRequestBuilder.addTarget(surface);

            // Here, we create a CameraCaptureSession for camera preview.

            cameraServiceList[openedCamera].mCameraDevice.createCaptureSession(Arrays.asList(surface),
                    new CameraCaptureSession.StateCallback() {
                /* This overload of createCaptureSession is deprecated; as of writing this tutorial,
                   the code provided still works. The new way of implementing this method is through
                   createCaptureSession(SessionConfiguration). */

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            // The camera is already closed

                            if (null == cameraServiceList[openedCamera].mCameraDevice) {

                                return;
                            }

                            // When the session is ready, we start displaying the preview.
                            cameraServiceList[openedCamera].captureSession = cameraCaptureSession;
                            try {
                                // Auto focus should be continuous for camera preview.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);


                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                cameraServiceList[openedCamera].captureSession.setRepeatingRequest(mPreviewRequest, null, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                            updatePreview();
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                        }
                    }, null
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

The method begins by getting the SurfaceTexture object from our AutoFitTextureView and setting its default buffer size to the size of the camera preview. A Surface object is then created from the SurfaceTexture.

Next, a CaptureRequest.Builder object is created for the camera device with a TEMPLATE_RECORD template, and the Surface object is added as a target.

A CameraCaptureSession is then created from the CameraDevice with the Surface as its only output. The onConfigured() method of the StateCallback is invoked once the session is ready: if the camera device has already been closed, the method simply returns; otherwise, the preview is started by enabling continuous auto-focus, building the CaptureRequest, and submitting it as a repeating request on the session. Finally, onConfigured() calls the updatePreview() method that we define below:

    private void updatePreview() {
        if (null == cameraServiceList[openedCamera].mCameraDevice) {
            return;
        }
        try {
            setUpCaptureRequestBuilder(mPreviewRequestBuilder);
            cameraServiceList[openedCamera].captureSession.setRepeatingRequest(mPreviewRequestBuilder.build(), null, mBackgroundHandler);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

setUpCaptureRequestBuilder(mPreviewRequestBuilder) is defined below:

    private void setUpCaptureRequestBuilder(CaptureRequest.Builder builder) {
        builder.set(CaptureRequest.CONTROL_MODE, CameraMetadata.CONTROL_MODE_AUTO);
    }

Back to cameraStateCallback :

The onDisconnected() method is called when the camera device is disconnected. It releases the lock that was held by mCameraOpenCloseLock, closes the camera device, and sets the corresponding CameraService object’s mCameraDevice variable to null.

The onError() method is called when an error occurs in the camera device. It releases the lock that was held by mCameraOpenCloseLock, closes the camera device, sets the corresponding CameraService object’s mCameraDevice variable to null, and finishes the activity.

2.6. mBackgroundHandler:
    private void startBackgroundThread() {
        mBackgroundThread = new HandlerThread("CameraBackground");
        mBackgroundThread.start();
        /* this allows the camera operations to run on a separate thread and avoid blocking the UI */
        mBackgroundHandler = new Handler(mBackgroundThread.getLooper());
        /* the handler is used to communicate with that thread */
    }

    private void stopBackgroundThread() {
        mBackgroundThread.quitSafely();
        try {
            mBackgroundThread.join();
            mBackgroundThread = null;
            mBackgroundHandler = null;
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
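HandlerThread and Handler are Android classes, but the lifecycle they implement here (start a worker thread, post tasks to it, drain the queue and join on shutdown) can be mimicked off-device with a single-thread executor. This is an analogy for illustration, not the tutorial’s actual code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BackgroundThreadDemo {
    // Stand-in for the HandlerThread/Handler pair: one worker thread
    // with a task queue, kept off the caller's (UI) thread.
    private ExecutorService background;

    void startBackgroundThread() {
        background = Executors.newSingleThreadExecutor();
    }

    void post(Runnable task) {
        background.submit(task);  // analogous to mBackgroundHandler.post(task)
    }

    void stopBackgroundThread() throws InterruptedException {
        background.shutdown();                             // like quitSafely(): finish queued work
        background.awaitTermination(1, TimeUnit.SECONDS);  // like mBackgroundThread.join()
        background = null;
    }

    public static void main(String[] args) throws InterruptedException {
        BackgroundThreadDemo demo = new BackgroundThreadDemo();
        AtomicInteger counter = new AtomicInteger();
        demo.startBackgroundThread();
        demo.post(counter::incrementAndGet);
        demo.post(counter::incrementAndGet);
        demo.stopBackgroundThread();               // waits for both tasks to finish
        System.out.println(counter.get());         // 2
    }
}
```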

Final Thoughts:

The entire code of this tutorial is available on GitHub at the following link:

https://github.com/lizardanddog/tutoForCamera

We have also included in the code a method to close the camera (defined below):

    private void closeCamera() {
        try {
            mCameraOpenCloseLock.acquire();
            if (null != cameraServiceList[openedCamera].captureSession) {
                cameraServiceList[openedCamera].captureSession.close();
                cameraServiceList[openedCamera].captureSession = null;
            }
            if (null != cameraServiceList[openedCamera].mCameraDevice) {
                cameraServiceList[openedCamera].mCameraDevice.close();
                cameraServiceList[openedCamera].mCameraDevice = null;
            }
            if (null != mImageReader) {
                mImageReader.close();
                mImageReader = null;
            }
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while trying to lock camera closing.", e);
        } finally {
            mCameraOpenCloseLock.release();
        }
    }

In the next part we’ll focus on how to capture images and videos and save them to the phone.

About us:

https://www.cloco.ai

https://www.lizardanddog.com
