A team has just received a task to build an application that needs to recognize faces in streaming videos.
They will get the source videos from a third party that uses a container format (MKV).
Correct Answer - B, C and F.
Facial recognition in live video differs from facial recognition in still photos: Kinesis is required to meet the needs of real-time processing.
Amazon Rekognition Video uses Amazon Kinesis Video Streams to receive and process a video stream.
The analysis results are output from Amazon Rekognition Video to a Kinesis data stream and then read by your client application.
Amazon Rekognition Video provides a stream processor (CreateStreamProcessor) that you can use to start and manage the analysis of streaming video.
In summary, the following three items are needed for Amazon Rekognition Video to work with streaming video:
A Kinesis video stream for sending streaming video to Amazon Rekognition Video.
For more information, see Kinesis Video Streams.
An Amazon Rekognition Video stream processor to manage the analysis of the streaming video.
For more information, see Starting Streaming Video Analysis.
A Kinesis data stream consumer to read the analysis results that Amazon Rekognition Video sends to the Kinesis data stream.
For more information, see Consumers for Amazon Kinesis Streams.
Option A is incorrect because the source videos should be put into a Kinesis video stream instead of S3. The Rekognition stream processor then picks up records from the Kinesis video stream to process.
Option B is CORRECT because it is the step to convert source data into the Kinesis video stream.
Option C is CORRECT.
A stream processor can be created by calling CreateStreamProcessor.
The request parameters include the Amazon Resource Names (ARNs) for the Kinesis video stream, the Kinesis data stream, and the identifier for the collection that's used to recognize faces in the streaming video.
It also includes the name that you specify for the stream processor.
Below is an example:
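A minimal sketch of such a request, assuming the AWS SDK for Python (boto3); every ARN, name, collection ID, and threshold below is a placeholder:

```python
# Request parameters for CreateStreamProcessor.
# All ARNs, names, and the collection ID are placeholders.
create_params = {
    "Name": "faceSearchProcessor",
    "Input": {
        "KinesisVideoStream": {
            "Arn": "arn:aws:kinesisvideo:us-east-1:111122223333:stream/inputVideo/123"
        }
    },
    "Output": {
        "KinesisDataStream": {
            "Arn": "arn:aws:kinesis:us-east-1:111122223333:stream/outputData"
        }
    },
    "Settings": {
        "FaceSearch": {
            "CollectionId": "myFaceCollection",  # collection used to recognize faces
            "FaceMatchThreshold": 85.0,
        }
    },
    "RoleArn": "arn:aws:iam::111122223333:role/rekognitionStreamRole",
}

# With boto3 and valid AWS credentials, the call would be:
#   import boto3
#   rekognition = boto3.client("rekognition", region_name="us-east-1")
#   rekognition.create_stream_processor(**create_params)
```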
Option D is incorrect because the Rekognition API “DetectFaces” should not be used for video processing. “DetectFaces” detects faces within an image that is provided as input. Instead, the stream processor APIs should be used.
Option E is incorrect because the output from Rekognition should be stored in the Kinesis data stream.
When the Rekognition stream processor is created, the Rekognition output (Kinesis Data Stream) is defined.
"Output": {
    "KinesisDataStream": {
        "Arn": "arn:aws:kinesis:us-east-1:nnnnnnnnnnnn:stream/outputData"
    }
}
Option F is CORRECT because it describes correctly how to consume the Kinesis data stream.
You can use the Amazon Kinesis Data Streams Client Library to consume analysis results that are sent to the Amazon Kinesis Data Streams output stream.
Details can be found in https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video-kinesis-output.html.
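Each record that the consumer reads from the data stream carries a JSON analysis result. A minimal sketch of decoding one record in Python, using an illustrative payload shaped like the documented face search output (all field values below are made up):

```python
import json

# Illustrative payload shaped like a Rekognition Video face search record.
raw_record = json.dumps({
    "InputInformation": {"KinesisVideo": {"StreamArn": "arn:aws:kinesisvideo:example"}},
    "FaceSearchResponse": [
        {
            "DetectedFace": {
                "BoundingBox": {"Left": 0.32, "Top": 0.21, "Width": 0.12, "Height": 0.25},
                "Confidence": 99.1,
            },
            "MatchedFaces": [
                {"Similarity": 97.5,
                 "Face": {"FaceId": "face-1234", "ExternalImageId": "employee-42"}}
            ],
        }
    ],
}).encode("utf-8")

def matched_face_ids(record_data):
    """Return the FaceId of every matched face in one analysis record."""
    doc = json.loads(record_data)
    return [match["Face"]["FaceId"]
            for face in doc.get("FaceSearchResponse", [])
            for match in face.get("MatchedFaces", [])]

print(matched_face_ids(raw_record))  # -> ['face-1234']
```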
To build an application that can recognize faces in streaming videos, several AWS services can be utilized. Here are the possible options:
A. S3 buckets to store the source MKV videos for AWS Rekognition to process: In this approach, the source videos are stored in S3 buckets and Rekognition processes them from there. S3 provides virtually unlimited, highly available, and durable storage, and the third party would need write access to the buckets to upload the videos. However, S3 is a batch-oriented source: it does not feed Rekognition Video's streaming analysis, so it does not meet the real-time requirement here.
B. A Kinesis video stream for sending streaming video to Amazon Rekognition Video: This approach involves using a Kinesis video stream to send streaming video to Amazon Rekognition Video. This can be done with the Kinesis Video Streams "PutMedia" API in the Java SDK. The PutMedia operation writes video data fragments into a Kinesis video stream that Amazon Rekognition Video consumes. This approach is suitable for real-time video processing.
C. An Amazon Rekognition Video stream processor to manage the analysis of the streaming video: This approach involves using an Amazon Rekognition Video stream processor to manage the analysis of the streaming video. The stream processor APIs let you create, start, stop, and delete processors as needed. This approach is suitable for real-time video processing.
D. Use EC2 or Lambda to call Rekognition API "DetectFaces" with the source videos saved in the S3 bucket: This approach involves using EC2 or Lambda to call the Rekognition API "DetectFaces" with the source videos saved in the S3 bucket. For each face detected, the operation returns face details such as a bounding box of the face, a confidence value, and a fixed set of attributes such as facial landmarks. However, "DetectFaces" operates on single images, so this approach is suitable only for batch processing, not for streaming video.
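For comparison with the streaming APIs, a minimal sketch of what a batch "DetectFaces" request and its response handling could look like in Python; the bucket, key, and every response value below are illustrative:

```python
# Batch-style request parameters for DetectFaces (bucket/key are placeholders).
detect_params = {
    "Image": {"S3Object": {"Bucket": "my-video-frames", "Name": "frame-0001.jpg"}},
    "Attributes": ["ALL"],
}

# Illustrative response shaped like a DetectFaces result (values are made up).
sample_response = {
    "FaceDetails": [
        {
            "BoundingBox": {"Left": 0.40, "Top": 0.10, "Width": 0.20, "Height": 0.30},
            "Confidence": 99.8,
            "Landmarks": [{"Type": "eyeLeft", "X": 0.45, "Y": 0.20}],
        }
    ]
}

# Pull out the bounding box and confidence for each detected face.
faces = [(f["BoundingBox"], f["Confidence"]) for f in sample_response["FaceDetails"]]
print(len(faces), faces[0][1])  # -> 1 99.8
```

Note that this returns per-image results only; there is no stream processor or Kinesis data stream involved, which is why it cannot satisfy the real-time requirement.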
E. After the app has used the Rekognition API to fetch the recognized faces from live videos, use an S3 bucket or an RDS database to store the output from Rekognition: This approach stores the Rekognition output in S3 or RDS, and another Lambda function can post-process the result and present it to the UI. For streaming analysis, however, Rekognition Video writes its results to a Kinesis data stream, not directly to S3 or RDS, so this option is incorrect.
F. A Kinesis data stream consumer to read the analysis results that Amazon Rekognition Video sends to the Kinesis data stream: This approach involves using a Kinesis data stream consumer to read the analysis results that Amazon Rekognition Video sends to the Kinesis data stream. The consumer can be autoscaled by running it on multiple EC2 instances under an Auto Scaling group.
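Production consumers typically use the Kinesis Client Library, but as a quick sketch, a low-level reader over one shard could look like this in Python. The client is injected, so any object exposing the same get_shard_iterator/get_records methods works; the stream and shard names are placeholders:

```python
import json

def drain_shard(kinesis, stream_name, shard_id, handle_record):
    """Read records from one shard and pass each decoded JSON payload to
    handle_record. `kinesis` is a boto3 "kinesis" client, or anything
    exposing the same get_shard_iterator/get_records methods."""
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]
    while iterator:
        batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in batch["Records"]:
            handle_record(json.loads(record["Data"]))
        if not batch["Records"]:
            break  # caught up; a real consumer would keep polling
        iterator = batch.get("NextShardIterator")
```

A real deployment would poll continuously, checkpoint progress, and run one worker per shard, which is what an Auto Scaling group of EC2 consumers provides.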
Overall, for real-time face recognition in streaming video, options B, C, and F together form the required pipeline: a Kinesis video stream feeds Amazon Rekognition Video, a stream processor manages the analysis, and a Kinesis data stream consumer reads the results. Option D suits only batch processing of stored images, while options A and E describe storage patterns that do not match how Rekognition Video ingests and outputs streaming data.