Multi-camera API

Note: This page refers to the Camera2 package. Unless your app requires specific, low-level features from Camera2, we recommend using CameraX. Both CameraX and Camera2 support Android 5.0 (API level 21) and higher.

Multi-camera was introduced in Android 9 (API level 28). Since its release, devices that support the API have been coming to market. Many multi-camera use cases are tightly coupled with a specific hardware configuration; in other words, not every use case is compatible with every device, which makes multi-camera features a good candidate for Play Feature Delivery.

Some typical use cases include:

  • Zoom: switching between cameras depending on the crop region or the desired focal length.
  • Depth: using multiple cameras to build a depth map.
  • Bokeh: using inferred depth information to simulate a DSLR-like narrow focus range.

The difference between logical and physical cameras

Understanding the multi-camera API requires understanding the difference between logical and physical cameras. For reference, consider a device with three back-facing cameras. In this example, each of the three back cameras is considered a physical camera. A logical camera is a grouping of two or more of those physical cameras. The output of the logical camera can be a stream that comes from one of the underlying physical cameras, or a fused stream coming from more than one underlying physical camera simultaneously. Either way, the stream is handled by the camera Hardware Abstraction Layer (HAL).

Many phone manufacturers develop first-party camera applications, which usually come preinstalled on their devices. To use all of the hardware's capabilities, these apps may use private or hidden APIs, or receive special treatment from driver implementations that other applications don't have access to. Some devices implement the concept of logical cameras by providing a fused stream of frames from the different physical cameras, but only to certain privileged applications. Often, only one of the physical cameras is exposed to the framework. The situation for third-party developers before Android 9 is illustrated in the following diagram:

Figure 1. Camera capabilities typically available only to privileged applications

Beginning in Android 9, private APIs are no longer allowed in Android apps. With the inclusion of multi-camera support in the framework, Android best practices strongly suggest that phone manufacturers expose a logical camera for all physical cameras facing the same direction. The following is what third-party developers should expect to see on devices running Android 9 and higher:

Figure 2. Full developer access to all camera devices starting in Android 9

What the logical camera provides is entirely dependent on the OEM implementation of the Camera HAL. For example, a device like Pixel 3 implements its logical camera in such a way that it chooses one of its physical cameras based on the requested focal length and crop region.

Multi-camera API

The new API adds constants, classes, and methods for working with logical multi-cameras, including CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA, CameraCharacteristics.getPhysicalCameraIds(), and CameraDevice.createCaptureSession(SessionConfiguration config), all of which are covered in the sections that follow.

Because of a change to the Android Compatibility Definition Document (CDD), the multi-camera API also comes with certain expectations from developers. Devices with dual cameras existed before Android 9, but opening more than one camera at the same time involved trial and error. On Android 9 and higher, multi-camera provides a set of rules that specify when you can open a pair of physical cameras that are part of the same logical camera.

In most cases, devices running Android 9 and higher expose all of their physical cameras (with the possible exception of less common sensor types such as infrared), along with an easier-to-use logical camera. For every combination of streams that is guaranteed to work, one stream belonging to a logical camera can be replaced by two streams from the underlying physical cameras.
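As a rough, hedged sketch of what that exposure looks like from application code (the helper name listLogicalCameras is illustrative, and a production app would also handle CameraAccessException), the following snippet walks the camera ID list and reports which IDs are logical multi-cameras and which physical IDs sit behind them:

Kotlin

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Illustrative sketch: list every camera ID exposed by the framework and, for
// logical multi-cameras (API level 28+), the physical camera IDs behind them.
fun listLogicalCameras(manager: CameraManager) {
    for (id in manager.cameraIdList) {
        val characteristics = manager.getCameraCharacteristics(id)
        val capabilities = characteristics.get(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES) ?: intArrayOf()
        if (capabilities.contains(
                CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)) {
            // physicalCameraIds is only non-empty for logical multi-cameras
            println("Camera $id is logical; physical IDs = ${characteristics.physicalCameraIds}")
        } else {
            println("Camera $id is a standalone camera")
        }
    }
}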

Multiple streams simultaneously

Using multiple camera streams simultaneously covers the rules for using multiple streams at the same time in a single camera. One notable addition is that the same rules apply to multiple cameras. CameraMetadata.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA explains how to replace a logical YUV_420_888 or raw stream with two physical streams. That is, each stream of type YUV or RAW can be replaced by two streams of identical type and size. You can start with a camera stream of the following guaranteed configuration for single-camera devices:

  • Stream 1: YUV type, MAXIMUM size, from logical camera id = 0

Then, a device with multi-camera support lets you create a session that replaces the logical YUV stream with two physical streams:

  • Stream 1: YUV type, MAXIMUM size, from physical camera id = 1
  • Stream 2: YUV type, MAXIMUM size, from physical camera id = 2

You can replace a YUV or RAW stream with two equivalent streams if and only if the two cameras are part of a logical camera grouping, as listed under CameraCharacteristics.getPhysicalCameraIds().
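A minimal sketch of that substitution follows; the 640x480 size, the ImageReader targets, and the helper name buildPhysicalYuvOutputs are placeholder assumptions (a real app would pick a size from the stream configuration map), and the full session setup is shown later on this page:

Kotlin

import android.graphics.ImageFormat
import android.hardware.camera2.CameraManager
import android.hardware.camera2.params.OutputConfiguration
import android.media.ImageReader

// Sketch only: express one logical YUV stream as two physical YUV streams of the
// same type and size. The camera IDs and the 640x480 size are illustrative.
fun buildPhysicalYuvOutputs(
    manager: CameraManager,
    logicalId: String,
    physicalId1: String,
    physicalId2: String
): List<OutputConfiguration>? {
    // The substitution is only valid if both IDs belong to this logical camera
    val physicalIds = manager.getCameraCharacteristics(logicalId).physicalCameraIds
    if (physicalId1 !in physicalIds || physicalId2 !in physicalIds) return null

    // One YUV target per physical camera, identical format and size for both
    return listOf(physicalId1, physicalId2).map { id ->
        val reader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 3)
        OutputConfiguration(reader.surface).apply { setPhysicalCameraId(id) }
    }
}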

The guarantees provided by the framework are just the bare minimum required to get frames from more than one physical camera at the same time. Most devices support additional streams, sometimes even allowing multiple physical camera devices to be opened independently. Because this isn't a hard guarantee from the framework, it requires per-device testing and tuning through trial and error.
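One hedged starting point for that per-device exploration, available on Android 11 (API level 30) and higher only, is CameraManager.getConcurrentCameraIds(), which reports the camera ID combinations the device claims can stream concurrently; treat the result as a hint for narrowing down testing rather than a guarantee:

Kotlin

import android.hardware.camera2.CameraManager
import android.os.Build

// Sketch, API level 30+ only: log which camera ID combinations the device reports
// as able to operate concurrently. This complements, but does not replace,
// per-device testing.
fun logConcurrentCameraCombinations(manager: CameraManager) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.R) return
    manager.concurrentCameraIds.forEach { combination ->
        println("These cameras can stream concurrently: $combination")
    }
}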

Creating a session with multiple physical cameras

When using physical cameras on a multi-camera-enabled device, open a single CameraDevice (the logical camera) and interact with it within a single session. Create the single session using the API CameraDevice.createCaptureSession(SessionConfiguration config), which was added in API level 28. The session configuration has a number of output configurations, each of which has a set of output targets and, optionally, a desired physical camera ID.

Figure 3. SessionConfiguration and OutputConfiguration model

Capture requests have output targets associated with them. The framework determines which physical (or logical) camera the request is sent to based on the attached output targets. If an output target corresponds to one of the output targets that was sent as an output configuration along with a physical camera ID, then that physical camera receives and processes the request.

Using a pair of physical cameras

The camera APIs for multi-camera also let you identify logical cameras and find the physical cameras behind them. You can define a function to help identify potential pairs of physical cameras that can be used to replace one of the logical camera streams:

Kotlin

/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
data class DualCamera(val logicalId: String, val physicalId1: String, val physicalId2: String)

fun findDualCameras(manager: CameraManager, facing: Int? = null): List<DualCamera> {
    val dualCameras = mutableListOf<DualCamera>()

    // Iterate over all the available camera characteristics
    manager.cameraIdList.map {
        Pair(manager.getCameraCharacteristics(it), it)
    }.filter {
        // Filter by cameras facing the requested direction
        facing == null || it.first.get(CameraCharacteristics.LENS_FACING) == facing
    }.filter {
        // Filter by logical cameras
        // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
        it.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES)!!.contains(
            CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA)
    }.forEach {
        // All possible pairs from the list of physical cameras are valid results
        // NOTE: There could be N physical cameras as part of a logical camera grouping
        // getPhysicalCameraIds() requires API >= 28
        val physicalCameras = it.first.physicalCameraIds.toTypedArray()
        for (idx1 in 0 until physicalCameras.size) {
            for (idx2 in (idx1 + 1) until physicalCameras.size) {
                dualCameras.add(DualCamera(
                    it.second, physicalCameras[idx1], physicalCameras[idx2]))
            }
        }
    }

    return dualCameras
}

Java

/**
 * Helper class used to encapsulate a logical camera and two underlying
 * physical cameras
 */
final class DualCamera {
    final String logicalId;
    final String physicalId1;
    final String physicalId2;

    DualCamera(String logicalId, String physicalId1, String physicalId2) {
        this.logicalId = logicalId;
        this.physicalId1 = physicalId1;
        this.physicalId2 = physicalId2;
    }
}

List<DualCamera> findDualCameras(CameraManager manager, Integer facing) {
    List<DualCamera> dualCameras = new ArrayList<>();

    List<String> cameraIdList;
    try {
        cameraIdList = Arrays.asList(manager.getCameraIdList());
    } catch (CameraAccessException e) {
        e.printStackTrace();
        cameraIdList = new ArrayList<>();
    }

    // Iterate over all the available camera characteristics
    cameraIdList.stream()
            .map(id -> {
                try {
                    CameraCharacteristics characteristics = manager.getCameraCharacteristics(id);
                    return new Pair<>(characteristics, id);
                } catch (CameraAccessException e) {
                    e.printStackTrace();
                    return null;
                }
            })
            .filter(pair -> {
                // Filter by cameras facing the requested direction
                return (pair != null) &&
                        (facing == null || pair.first.get(CameraCharacteristics.LENS_FACING).equals(facing));
            })
            .filter(pair -> {
                // Filter by logical cameras
                // CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA requires API >= 28
                IntPredicate logicalMultiCameraPred =
                        arg -> arg == CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES_LOGICAL_MULTI_CAMERA;
                return Arrays.stream(pair.first.get(CameraCharacteristics.REQUEST_AVAILABLE_CAPABILITIES))
                        .anyMatch(logicalMultiCameraPred);
            })
            .forEach(pair -> {
                // All possible pairs from the list of physical cameras are valid results
                // NOTE: There could be N physical cameras as part of a logical camera grouping
                // getPhysicalCameraIds() requires API >= 28
                String[] physicalCameras = pair.first.getPhysicalCameraIds().toArray(new String[0]);
                for (int idx1 = 0; idx1 < physicalCameras.length; idx1++) {
                    for (int idx2 = idx1 + 1; idx2 < physicalCameras.length; idx2++) {
                        dualCameras.add(
                                new DualCamera(pair.second, physicalCameras[idx1], physicalCameras[idx2]));
                    }
                }
            });

    return dualCameras;
}

The state of the physical cameras is controlled through the logical camera. To open your "dual camera," open the logical camera that corresponds to the physical cameras:

Kotlin

fun openDualCamera(cameraManager: CameraManager,
                       dualCamera: DualCamera,
        // AsyncTask is deprecated beginning API 30
                       executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                       callback: (CameraDevice) -> Unit) {

        // openCamera() requires API >= 28
        cameraManager.openCamera(
            dualCamera.logicalId, executor, object : CameraDevice.StateCallback() {
                override fun onOpened(device: CameraDevice) = callback(device)
                // Omitting for brevity...
                override fun onError(device: CameraDevice, error: Int) = onDisconnected(device)
                override fun onDisconnected(device: CameraDevice) = device.close()
            })
    }

Java

// Assumption: a simple callback interface like this is used to hand back the opened device
interface CameraDeviceCallback {
    void callback(CameraDevice cameraDevice);
}

void openDualCamera(CameraManager cameraManager,
                        DualCamera dualCamera,
                        Executor executor,
                        CameraDeviceCallback cameraDeviceCallback
    ) {

        // openCamera() requires API >= 28
        cameraManager.openCamera(dualCamera.logicalId, executor, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(@NonNull CameraDevice cameraDevice) {
               cameraDeviceCallback.callback(cameraDevice);
            }

            @Override
            public void onDisconnected(@NonNull CameraDevice cameraDevice) {
                cameraDevice.close();
            }

            @Override
            public void onError(@NonNull CameraDevice cameraDevice, int i) {
                onDisconnected(cameraDevice);
            }
        });
    }

Other than selecting which camera to open, the process is the same as opening a camera in earlier versions of Android. Creating a capture session using the new session configuration API tells the framework to associate specific targets with specific physical camera IDs:

Kotlin

/**
 * Helper type definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
typealias DualCameraOutputs =
        Triple<MutableList<Surface>?, MutableList<Surface>?, MutableList<Surface>?>

fun createDualCameraSession(cameraManager: CameraManager,
                            dualCamera: DualCamera,
                            targets: DualCameraOutputs,
                            // AsyncTask is deprecated beginning API 30
                            executor: Executor = AsyncTask.SERIAL_EXECUTOR,
                            callback: (CameraCaptureSession) -> Unit) {

    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    val outputConfigsLogical = targets.first?.map { OutputConfiguration(it) }
    val outputConfigsPhysical1 = targets.second?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId1) } }
    val outputConfigsPhysical2 = targets.third?.map {
        OutputConfiguration(it).apply { setPhysicalCameraId(dualCamera.physicalId2) } }

    // Put all the output configurations into a single flat array
    val outputConfigsAll = arrayOf(
        outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
        .filterNotNull().flatMap { it }

    // Instantiate a session configuration that can be used to create a session
    val sessionConfiguration = SessionConfiguration(
        SessionConfiguration.SESSION_REGULAR,
        outputConfigsAll, executor, object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) = callback(session)
            // Omitting for brevity...
            override fun onConfigureFailed(session: CameraCaptureSession) = session.device.close()
        })

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor = executor) {

        // Finally create the session and return via callback
        it.createCaptureSession(sessionConfiguration)
    }
}

Java

/**
 * Helper class definition that encapsulates 3 sets of output targets:
 *
 *   1. Logical camera
 *   2. First physical camera
 *   3. Second physical camera
 */
final class DualCameraOutputs {
    private final List<Surface> logicalCamera;
    private final List<Surface> firstPhysicalCamera;
    private final List<Surface> secondPhysicalCamera;

    public DualCameraOutputs(List<Surface> logicalCamera, List<Surface> firstPhysicalCamera,
                             List<Surface> secondPhysicalCamera) {
        this.logicalCamera = logicalCamera;
        this.firstPhysicalCamera = firstPhysicalCamera;
        this.secondPhysicalCamera = secondPhysicalCamera;
    }

    public List<Surface> getLogicalCamera() { return logicalCamera; }
    public List<Surface> getFirstPhysicalCamera() { return firstPhysicalCamera; }
    public List<Surface> getSecondPhysicalCamera() { return secondPhysicalCamera; }
}

interface CameraCaptureSessionCallback {
    void callback(CameraCaptureSession cameraCaptureSession);
}

void createDualCameraSession(CameraManager cameraManager,
                             DualCamera dualCamera,
                             DualCameraOutputs targets,
                             Executor executor,
                             CameraCaptureSessionCallback cameraCaptureSessionCallback) {

    // Create 3 sets of output configurations: one for the logical camera, and
    // one for each of the physical cameras.
    List<OutputConfiguration> outputConfigsLogical = targets.getLogicalCamera().stream()
            .map(OutputConfiguration::new)
            .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical1 = targets.getFirstPhysicalCamera().stream()
            .map(s -> {
                OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                outputConfiguration.setPhysicalCameraId(dualCamera.physicalId1);
                return outputConfiguration;
            })
            .collect(Collectors.toList());
    List<OutputConfiguration> outputConfigsPhysical2 = targets.getSecondPhysicalCamera().stream()
            .map(s -> {
                OutputConfiguration outputConfiguration = new OutputConfiguration(s);
                outputConfiguration.setPhysicalCameraId(dualCamera.physicalId2);
                return outputConfiguration;
            })
            .collect(Collectors.toList());

    // Put all the output configurations into a single flat array
    List<OutputConfiguration> outputConfigsAll = Stream.of(
                    outputConfigsLogical, outputConfigsPhysical1, outputConfigsPhysical2)
            .filter(Objects::nonNull)
            .flatMap(Collection::stream)
            .collect(Collectors.toList());

    // Instantiate a session configuration that can be used to create a session
    SessionConfiguration sessionConfiguration = new SessionConfiguration(
            SessionConfiguration.SESSION_REGULAR, outputConfigsAll,
            executor, new CameraCaptureSession.StateCallback() {
                @Override
                public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                    cameraCaptureSessionCallback.callback(cameraCaptureSession);
                }
                // Omitting for brevity...
                @Override
                public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                    cameraCaptureSession.getDevice().close();
                }
            });

    // Open the logical camera using the previously defined function
    openDualCamera(cameraManager, dualCamera, executor, (CameraDevice c) -> {
        // Finally create the session and return via callback
        try {
            c.createCaptureSession(sessionConfiguration);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    });
}

For information about the supported combinations of streams, refer to createCaptureSession. Combining streams is defined for multiple streams on a single logical camera. That compatibility extends to using the same configuration and replacing one of those streams with two streams from two physical cameras that are part of the same logical camera.

With the camera session ready, dispatch the desired capture requests. Each target of a capture request receives its data from its associated physical camera, if one is in use, or falls back to the logical camera.
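If you also need per-camera metadata, one option is to inspect the physical camera results attached to the logical camera's TotalCaptureResult, as in the sketch below. This is not part of the samples above; getPhysicalCameraResults() was added in API level 28, and newer API levels offer superseding variants.

Kotlin

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.CaptureResult
import android.hardware.camera2.TotalCaptureResult

// Sketch: read per-physical-camera metadata from a completed capture
val captureCallback = object : CameraCaptureSession.CaptureCallback() {
    override fun onCaptureCompleted(
        session: CameraCaptureSession,
        request: CaptureRequest,
        result: TotalCaptureResult
    ) {
        result.physicalCameraResults.forEach { (physicalId, physicalResult) ->
            // For example, log the focal length reported by each physical camera
            val focalLength = physicalResult.get(CaptureResult.LENS_FOCAL_LENGTH)
            println("Physical camera $physicalId focal length: $focalLength")
        }
    }
}

A callback like this can be passed where the samples on this page pass null to setRepeatingRequest() or capture().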

Zoom example use case

You can combine the physical cameras into a single stream, so that users can switch between the different physical cameras to experience a different field of view, effectively capturing a different "zoom level."

Figure 4. Example of swapping cameras for the zoom level use case (from the Pixel 3 ad)

Start by selecting the pair of physical cameras that users can switch between. For the maximum effect, you can choose the pair of cameras that provide the shortest and longest focal lengths available.

Kotlin

fun findShortLongCameraPair(manager: CameraManager, facing: Int? = null): DualCamera? {

    return findDualCameras(manager, facing).map {
        val characteristics1 = manager.getCameraCharacteristics(it.physicalId1)
        val characteristics2 = manager.getCameraCharacteristics(it.physicalId2)

        // Query the focal lengths advertised by each physical camera
        val focalLengths1 = characteristics1.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)
        val focalLengths2 = characteristics2.get(
            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS) ?: floatArrayOf(0F)

        // Compute the largest difference between min and max focal lengths between cameras
        val focalLengthsDiff1 = focalLengths2.maxOrNull()!! - focalLengths1.minOrNull()!!
        val focalLengthsDiff2 = focalLengths1.maxOrNull()!! - focalLengths2.minOrNull()!!

        // Return the pair of camera IDs and the difference between min and max focal lengths
        if (focalLengthsDiff1 < focalLengthsDiff2) {
            Pair(DualCamera(it.logicalId, it.physicalId1, it.physicalId2), focalLengthsDiff1)
        } else {
            Pair(DualCamera(it.logicalId, it.physicalId2, it.physicalId1), focalLengthsDiff2)
        }

        // Return only the pair with the largest difference, or null if no pairs are found
    }.maxByOrNull { it.second }?.first
}

Java

// Utility functions to find min/max value in float[]
    float findMax(float[] array) {
        float max = Float.NEGATIVE_INFINITY;
        for(float cur: array)
            max = Math.max(max, cur);
        return max;
    }
    float findMin(float[] array) {
        float min = Float.POSITIVE_INFINITY;
        for(float cur: array)
            min = Math.min(min, cur);
        return min;
    }

DualCamera findShortLongCameraPair(CameraManager manager, Integer facing) {
        return findDualCameras(manager, facing).stream()
                .map(c -> {
                    CameraCharacteristics characteristics1;
                    CameraCharacteristics characteristics2;
                    try {
                        characteristics1 = manager.getCameraCharacteristics(c.physicalId1);
                        characteristics2 = manager.getCameraCharacteristics(c.physicalId2);
                    } catch (CameraAccessException e) {
                        e.printStackTrace();
                        return null;
                    }

                    // Query the focal lengths advertised by each physical camera
                    float[] focalLengths1 = characteristics1.get(
                            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);
                    float[] focalLengths2 = characteristics2.get(
                            CameraCharacteristics.LENS_INFO_AVAILABLE_FOCAL_LENGTHS);

                    // Compute the largest difference between min and max focal lengths between cameras
                    Float focalLengthsDiff1 = findMax(focalLengths2) - findMin(focalLengths1);
                    Float focalLengthsDiff2 = findMax(focalLengths1) - findMin(focalLengths2);

                    // Return the pair of camera IDs and the difference between min and max focal lengths
                    if (focalLengthsDiff1 < focalLengthsDiff2) {
                        return new Pair<>(new DualCamera(c.logicalId, c.physicalId1, c.physicalId2), focalLengthsDiff1);
                    } else {
                        return new Pair<>(new DualCamera(c.logicalId, c.physicalId2, c.physicalId1), focalLengthsDiff2);
                    }

                }) // Return only the pair with the largest difference, or null if no pairs are found
                .max(Comparator.comparing(pair -> pair.second)).get().first;
    }

A sensible architecture for this is to have two SurfaceViews, one for each stream. These SurfaceViews are swapped based on user interaction, so that only one is visible at any given time.

The following code shows how to open the logical camera, configure the camera outputs, create a camera session, and start two preview streams:

Kotlin

val cameraManager: CameraManager = ...

// Get the two output targets from the activity / fragment
val surface1 = ...  // from SurfaceView
val surface2 = ...  // from SurfaceView

val dualCamera = findShortLongCameraPair(cameraManager)!!
val outputTargets = DualCameraOutputs(
    null, mutableListOf(surface1), mutableListOf(surface2))

// Here you open the logical camera, configure the outputs and create a session
createDualCameraSession(cameraManager, dualCamera, targets = outputTargets) { session ->

  // Create a single request which has one target for each physical camera
  // NOTE: Each target receives frames from only its associated physical camera
  val requestTemplate = CameraDevice.TEMPLATE_PREVIEW
  val captureRequest = session.device.createCaptureRequest(requestTemplate).apply {
    arrayOf(surface1, surface2).forEach { addTarget(it) }
  }.build()

  // Set the sticky request for the session and you are done
  session.setRepeatingRequest(captureRequest, null, null)
}

Java

CameraManager manager = ...;

        // Get the two output targets from the activity / fragment
        Surface surface1 = ...;  // from SurfaceView
        Surface surface2 = ...;  // from SurfaceView

        DualCamera dualCamera = findShortLongCameraPair(manager, null);
        DualCameraOutputs outputTargets = new DualCameraOutputs(
                null, Collections.singletonList(surface1), Collections.singletonList(surface2));

        // Here you open the logical camera, configure the outputs and create a session
        // AsyncTask is deprecated beginning API 30
        createDualCameraSession(manager, dualCamera, outputTargets, AsyncTask.SERIAL_EXECUTOR, (session) -> {
            // Create a single request which has one target for each physical camera
            // NOTE: Each target receives frames from only its associated physical camera
            CaptureRequest.Builder captureRequestBuilder;
            try {
                captureRequestBuilder = session.getDevice().createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
                Arrays.asList(surface1, surface2).forEach(captureRequestBuilder::addTarget);

                // Set the sticky request for the session and you are done
                session.setRepeatingRequest(captureRequestBuilder.build(), null, null);
            } catch (CameraAccessException e) {
                e.printStackTrace();
            }
        });

All that's left is to provide a UI for the user to switch between the two surfaces, such as a button or double-tapping the SurfaceView. You could even perform some form of scene analysis and switch between the two streams automatically.
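A minimal sketch of the double-tap variant is shown below; the onSwitchCamera lambda is a placeholder for whatever your app does to swap the visible preview:

Kotlin

import android.annotation.SuppressLint
import android.content.Context
import android.view.GestureDetector
import android.view.MotionEvent
import android.view.SurfaceView

// Sketch: detect a double tap on the preview and hand off to app-provided
// switching logic. onSwitchCamera is a placeholder for your own swap behavior.
@SuppressLint("ClickableViewAccessibility")
fun enableDoubleTapToSwitch(
    context: Context,
    previewView: SurfaceView,
    onSwitchCamera: () -> Unit
) {
    val detector = GestureDetector(context, object : GestureDetector.SimpleOnGestureListener() {
        override fun onDoubleTap(e: MotionEvent): Boolean {
            onSwitchCamera()
            return true
        }
    })
    previewView.setOnTouchListener { _, event -> detector.onTouchEvent(event) }
}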

Lens distortion

All lenses produce a certain amount of distortion. In Android, you can query the distortion created by a lens using CameraCharacteristics.LENS_DISTORTION, which replaces the now-deprecated CameraCharacteristics.LENS_RADIAL_DISTORTION. For logical cameras, the distortion is minimal and your application can use the frames more or less as they come from the camera. For the physical cameras, however, the lens configurations can be very different, especially on wide-angle lenses.
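As a small sketch of that query (reusing the DualCamera helper defined earlier; the meaning of the coefficients is described in the LENS_DISTORTION reference), you can compare the distortion advertised by each physical camera in a pair:

Kotlin

import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraManager

// Sketch: read the lens distortion coefficients advertised by each physical camera
// of a pair. LENS_DISTORTION is available from API level 28 and may be null if the
// device does not report calibration data.
fun logLensDistortion(manager: CameraManager, dualCamera: DualCamera) {
    listOf(dualCamera.physicalId1, dualCamera.physicalId2).forEach { id ->
        val distortion = manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_DISTORTION)
        println("Physical camera $id distortion coefficients: ${distortion?.joinToString()}")
    }
}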

Some devices may implement automatic distortion correction via CaptureRequest.DISTORTION_CORRECTION_MODE. Distortion correction defaults to being on for most devices.

Kotlin

val cameraSession: CameraCaptureSession = ...

        // Use still capture template to build the capture request
        val captureRequest = cameraSession.device.createCaptureRequest(
            CameraDevice.TEMPLATE_STILL_CAPTURE
        )

        // Determine if this device supports distortion correction
        val characteristics: CameraCharacteristics = ...
        val supportsDistortionCorrection = characteristics.get(
            CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES
        )?.contains(
            CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
        ) ?: false

        if (supportsDistortionCorrection) {
            captureRequest.set(
                CaptureRequest.DISTORTION_CORRECTION_MODE,
                CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
            )
        }

        // Add output target, set other capture request parameters...

        // Dispatch the capture request
        cameraSession.capture(captureRequest.build(), ...)

Java

CameraCaptureSession cameraSession = ...;

        // Use still capture template to build the capture request
        CaptureRequest.Builder captureRequestBuilder = null;
        try {
            captureRequestBuilder = cameraSession.getDevice().createCaptureRequest(
                    CameraDevice.TEMPLATE_STILL_CAPTURE
            );
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }

        // Determine if this device supports distortion correction
        CameraCharacteristics characteristics = ...;
        boolean supportsDistortionCorrection = Arrays.stream(
                        characteristics.get(
                                CameraCharacteristics.DISTORTION_CORRECTION_AVAILABLE_MODES
                        ))
                .anyMatch(i -> i == CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY);
        if (supportsDistortionCorrection) {
            captureRequestBuilder.set(
                    CaptureRequest.DISTORTION_CORRECTION_MODE,
                    CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY
            );
        }

        // Add output target, set other capture request parameters...

        // Dispatch the capture request
        cameraSession.capture(captureRequestBuilder.build(), ...);

Setting a capture request in this mode can impact the frame rate that the camera can produce. You may choose to set the distortion correction only on still image captures.
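For example, one hedged pattern, assuming the device lists both FAST and HIGH_QUALITY in DISTORTION_CORRECTION_AVAILABLE_MODES, is to keep the repeating preview request on the faster mode and reserve HIGH_QUALITY for still captures:

Kotlin

import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraMetadata
import android.hardware.camera2.CaptureRequest
import android.view.Surface

// Sketch: FAST correction for the repeating preview request, HIGH_QUALITY only for
// still captures. Assumes both modes are reported as available on this device.
fun dispatchRequests(session: CameraCaptureSession, previewSurface: Surface, stillSurface: Surface) {
    val preview = session.device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW).apply {
        addTarget(previewSurface)
        set(CaptureRequest.DISTORTION_CORRECTION_MODE,
            CameraMetadata.DISTORTION_CORRECTION_MODE_FAST)
    }
    session.setRepeatingRequest(preview.build(), null, null)

    val still = session.device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE).apply {
        addTarget(stillSurface)
        set(CaptureRequest.DISTORTION_CORRECTION_MODE,
            CameraMetadata.DISTORTION_CORRECTION_MODE_HIGH_QUALITY)
    }
    session.capture(still.build(), null, null)
}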