Video calling

Set up video calls with the Sinch In-app SDK.

Setting up a video call

Just like audio calls, video calls are placed through SinchCallClient, and events are received via SinchCallClientDelegate and SinchCallDelegate. For a more general introduction to calling and SinchCallClient, see Audio calling.

Before you start, ensure your application requests the user's permission to use the camera (see Request user permission for using the camera below).
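
As a rough sketch, placing a video call could look like the following. The method name callUserVideo(withId:) is an assumption for illustration; consult the SinchCallClient reference for the exact API.

guard let callClient = sinchClient?.callClient else { return }

// Hypothetical method name; verify the exact signature in the SDK reference.
let call = callClient.callUserVideo(withId: "remote-user-id")
call.delegate = self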

Showing the video streams

The examples below assume a view controller with the following properties:

final class VideoCallViewController: UIViewController {

  @IBOutlet private var remoteVideoView: UIView!
  @IBOutlet private var localVideoView: UIView!
}

Showing a preview of the local video stream

The locally captured stream is rendered into the view provided by SinchVideoController.localView when it's attached to the application UI view hierarchy.

override func viewDidLoad() {
  super.viewDidLoad()

  guard let videoController = sinchClient?.videoController else { return }

  localVideoView.addSubview(videoController.localView)
}
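
Depending on your layout, you may also want to size the embedded view to fill its container. A minimal sketch using plain frame-based sizing (adapt this to your own layout, or use Auto Layout constraints instead):

override func viewDidLayoutSubviews() {
  super.viewDidLayoutSubviews()

  // Keep the local preview matched to its container's bounds.
  sinchClient?.videoController.localView.frame = localVideoView.bounds
}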

Showing remote video streams

Once the remote video stream is available, the delegate method callDidAddVideoTrack(_:) is called. Use it to attach the Sinch video controller view (SinchVideoController.remoteView) to your application's view hierarchy so that the stream is rendered.

func callDidAddVideoTrack(_ call: SinchCall) {
  guard let videoController = sinchClient?.videoController else { return }

  remoteVideoView.addSubview(videoController.remoteView)
}

The remote stream is rendered automatically into the view provided by SinchVideoController.remoteView.
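
When the call ends, detach the view again. A minimal sketch, assuming the SinchCallDelegate method callDidEnd(_:) (verify the exact name in the SDK reference):

func callDidEnd(_ call: SinchCall) {
  // Remove the remote video view from the hierarchy when the call is over.
  sinchClient?.videoController.remoteView.removeFromSuperview()
}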

Pausing and resuming a video stream

To pause the local video stream, use call.pauseVideo(). To resume the local video stream, use call.resumeVideo().

// Pause the video stream
call.pauseVideo()
// Resume the video stream
call.resumeVideo()

The call delegate is notified of pause and resume events via the callback methods callDidPauseVideoTrack(_:) and callDidResumeVideoTrack(_:). Use these, for example, to show a pause indicator in the UI and remove it on resume, as sketched below.
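
A minimal sketch that toggles a pause indicator (pausedLabel is a hypothetical view in your own UI):

func callDidPauseVideoTrack(_ call: SinchCall) {
  // Show a "video paused" indicator.
  pausedLabel.isHidden = false
}

func callDidResumeVideoTrack(_ call: SinchCall) {
  // Hide the indicator again on resume.
  pausedLabel.isHidden = true
}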

Video content fitting and aspect ratio

How the rendered video stream is fitted into a view can be controlled by UIView.contentMode. Assigning contentMode on a view returned by SinchVideoController.remoteView or SinchVideoController.localView will affect how the video content is laid out. Note that only .scaleAspectFit and .scaleAspectFill will be respected.

Example

guard let videoController = sinchClient?.videoController else { return }

videoController.remoteView.contentMode = .scaleAspectFill

Full screen mode

The Sinch SDK provides helper functions to transition a video view into fullscreen mode. These are provided as extension methods for the UIView class and are defined in UIView+Fullscreen.swift.

Example

@IBAction func toggleFullscreen(_ sender: Any) {
  guard let view = sinchClient?.videoController.remoteView else { return }

  if view.sinIsFullscreen() {
    view.contentMode = .scaleAspectFit
    view.sinDisableFullscreen(true) // Pass true to animate the transition
  } else {
    view.contentMode = .scaleAspectFill
    view.sinEnableFullscreen(true)  // Pass true to animate the transition
  }
}

Camera selection (front/back)

Select the front or back camera using SinchVideoController.captureDevicePosition. To switch cameras programmatically, call captureDevicePosition.toggle().

Example

@IBAction func toggleSwitchCamera(_ sender: Any) {
  guard let videoController = sinchClient?.videoController else { return }

  videoController.captureDevicePosition.toggle()
}

Accessing raw video frames from remote and local streams

The Sinch SDK provides access to the raw video frames of the remote and local video streams. This allows you to process frames with your own implementation, for example to apply filters, add stickers, or save a frame as an image.

Perform custom video frame processing by implementing SinchVideoFrameCallback and registering it using setRemoteVideoFrameCallback(_:) and setLocalVideoFrameCallback(_:). The callback provides each frame as a CVPixelBuffer, together with a completion handler closure that you must invoke with the processed output frame (also a CVPixelBuffer).

Example:

final class FrameProcessor: NSObject, SinchVideoFrameCallback {

  func onFrame(_ pixelBuffer: CVPixelBuffer,
               completionHandler: @escaping (CVPixelBuffer) -> Void) {
    // Dispatch filter operations to a background queue with high priority
    // to prevent blocking the SDK while processing frames
    DispatchQueue.global(qos: .userInteractive).async {
      let sourceImage = CIImage(cvPixelBuffer: pixelBuffer, options: nil)
      let sourceExtent = sourceImage.extent

      let vignetteFilter = CIFilter(name: "CIVignetteEffect")!
      vignetteFilter.setValue(sourceImage, forKey: kCIInputImageKey)
      vignetteFilter.setValue(CIVector(x: sourceExtent.size.width / 2,
                                       y: sourceExtent.size.height / 2),
                              forKey: kCIInputCenterKey)
      vignetteFilter.setValue(sourceExtent.size.width / 2, forKey: kCIInputRadiusKey)
      var filteredImage = vignetteFilter.outputImage ?? sourceImage

      let effectFilter = CIFilter(name: "CIPhotoEffectInstant")!
      effectFilter.setValue(filteredImage, forKey: kCIInputImageKey)
      filteredImage = effectFilter.outputImage ?? filteredImage

      let ciContext = CIContext(options: nil)
      ciContext.render(filteredImage, to: pixelBuffer)

      completionHandler(pixelBuffer)
    }
  }
}

Registration in your VideoCallViewController:

private let frameProcessor = FrameProcessor()

func registerFrameCallbacks() {
  guard let videoController = sinchClient?.videoController else { return }

  videoController.setRemoteVideoFrameCallback(frameProcessor)
  videoController.setLocalVideoFrameCallback(frameProcessor)
}
Notes:
  • It's recommended to perform frame processing asynchronously using GCD with DispatchQueue, tuning the queue priority to your use case. If you process every frame (for example, applying a filter), use DispatchQueue.global(qos: .userInteractive). If you only process occasional frames (for example, saving snapshot frames on user action), DispatchQueue.global(qos: .background) may be more appropriate.
  • In Swift with ARC, CVPixelBuffer objects are memory-managed automatically. Unlike in Objective-C, you don't need to call CVPixelBufferRetain() or CVPixelBufferRelease().
  • The approach shown in the example above might provoke a crash on older iOS versions (for example, iOS 11.x or iOS 12.x) due to a bug in Core Image (see StackOverflow threads 1 and 2). If your deployment target is lower than iOS 13.0, consider using an image processing library other than Core Image.

Converting a video frame to UIImage

The Sinch SDK provides an extension on CVPixelBuffer with the method sinGetUIImage() for converting a CVPixelBuffer to a UIImage.

let image = pixelBuffer.sinGetUIImage()
Important!

If you retain a pixel buffer for asynchronous work, release it after you're done.
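
For example, a snapshot callback that converts a single frame on user action might look like the following sketch. The captureNextFrame flag is hypothetical state in your own code, the return type of sinGetUIImage() is assumed to be a non-optional UIImage as in the snippet above, and saving to the photo library additionally requires the NSPhotoLibraryAddUsageDescription Info.plist key.

final class SnapshotCallback: NSObject, SinchVideoFrameCallback {

  // Hypothetical flag, set from your UI to capture the next frame.
  var captureNextFrame = false

  func onFrame(_ pixelBuffer: CVPixelBuffer,
               completionHandler: @escaping (CVPixelBuffer) -> Void) {
    // Snapshots are occasional, so a background QoS is appropriate here.
    DispatchQueue.global(qos: .background).async {
      if self.captureNextFrame {
        self.captureNextFrame = false
        let image = pixelBuffer.sinGetUIImage()
        UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
      }
      // Hand the unmodified frame back to the SDK.
      completionHandler(pixelBuffer)
    }
  }
}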

Request user permission for using the camera

Recording video always requires explicit permission from the user. Your app must describe its use of the camera via the NSCameraUsageDescription key in its Info.plist.

See Apple's documentation on AVCaptureDevice.requestAccess(for:completionHandler:) for details on how to request user permission.
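
For example:

import AVFoundation

func requestCameraPermission() {
  AVCaptureDevice.requestAccess(for: .video) { granted in
    DispatchQueue.main.async {
      if granted {
        // Camera access granted; proceed with video call setup.
      } else {
        // Explain to the user that video calling requires camera access.
      }
    }
  }
}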