Revisiting Autofocus for Smartphone Cameras

Abdullah Abuolaim, Abhijith Punnappurath, and Michael S. Brown

Department of Electrical Engineering and Computer Science

Lassonde School of Engineering, York University, Canada

{abuolaim, pabhijith, mbrown}@eecs.yorku.ca

Links to content

Videos captured by different smartphones

The videos shown here correspond to the text in Sec. 3.3 of the main paper. To view a video, first select the scene, then click on a smartphone button to play the video captured by that camera for the selected scene. This functionality requires JavaScript to be enabled.


Click on smartphone name to play video below:

Back to links to content

Output videos generated using our AF platform and dataset

The videos shown here correspond to the text in Sec. 4.3 and Sec. 5 of the main paper. These videos were generated by our AF platform and used in our user study. To view a video, first select the scene, then click on an objective button to play the video generated with that objective for the selected scene. This functionality requires JavaScript to be enabled.

As discussed in Sec. 5, for Scene 6 we omitted the out-of-focus objective from the user study because there is no lens position that makes all scene elements out of focus. The corresponding video is provided here, but it was not shown as part of the user study.

Click on objective to play video below:

Back to links to content

Video of our data browser

This video relates to Sec. 3.3 in the main paper, which discusses the capture of our 4D temporal image sequence dataset. In particular, this video is a real-time screen capture of our AF platform, which allows a captured image sequence (denoted as a scene) to be located and browsed. The user can dynamically change the time point and lens position.

Back to links to content

Video of our platform running

API calls

This PDF provides more details on the API calls provided by our AF platform. It is related to Sec. 4.2 in the main paper.
(Click to open the API PDF in a new window.)

Back to links to content

View Python code

This HTML page shows the Python code used to communicate with the AF platform API in order to generate an output video of Scene 4 using the "face region" objective. The inclusion of this Python example in the supplemental materials was discussed in Sec. 4.3. Similar code is used to generate the other output videos.
Click here to view the Python code.
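For orientation, the sketch below illustrates the general structure of such a script: select a scene, choose an AF objective, and request an output video. It is only an assumed outline; the module, class, and method names (af_platform, AFPlatform, set_scene, set_objective, render_video) are hypothetical placeholders, and the authoritative calls are in the linked Python code and the API PDF above.

# Hypothetical sketch only: the module and method names below are placeholders,
# not the real AF platform API (see the linked Python code for the actual calls).
from af_platform import AFPlatform  # assumed client module for the AF platform

def generate_face_region_video(output_path="scene4_face_region.mp4"):
    platform = AFPlatform()                  # connect to the running AF platform
    platform.set_scene(4)                    # select the captured image sequence (Scene 4)
    platform.set_objective("face_region")    # choose the AF objective to apply over time
    platform.render_video(output_path)       # produce the output video used in the user study
    return output_path

if __name__ == "__main__":
    print("Wrote", generate_face_region_video())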

Back to links to content

View output script

This text file shows the output script that was captured during the execution of our AF platform. This script was generated from the Python code shown above. The inclusion of an output script in the supplemental materials was discussed in Sec. 4.3.
Click here to view the output script.

Back to links to content

Animated snapshots of our user study GUI

As discussed in Sec. 5 of the main paper, we include an animated snapshot of our user study GUI. Users only need to provide their gender and age once. Each time they click "play video", a randomly chosen video is played using VLC.
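As a rough illustration (not the authors' actual GUI code), the snippet below sketches how the "play video" action could work: pick one of the study videos at random and open it with VLC. The videos directory name and the VLC command-line flags used here are assumptions.

# Minimal sketch (assumed mechanism, not the actual user-study GUI code).
import random
import subprocess
from pathlib import Path

def play_random_video(video_dir="user_study_videos"):
    videos = sorted(Path(video_dir).glob("*.mp4"))   # candidate study videos (assumed layout)
    chosen = random.choice(videos)                   # pick one at random for this trial
    # Launch VLC in fullscreen; --play-and-exit closes VLC when playback finishes.
    subprocess.run(["vlc", "--fullscreen", "--play-and-exit", str(chosen)])
    return chosen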

User-study GUI

Back to links to content

Example output video frames

As discussed in Sec. 5 (Fig. 8), this image shows an additional example of output video frames generated by our AF platform using different objectives applied to Scene 10 over time.

Example output video frames

Back to links to content

Download supplemental materials

Below are the supplemental materials submitted to ECCV 2018. Clicking the name or icon will take you to a page where you can download the .rar archive.


[Supplemental materials]

Back to links to content

This page contains files that may be protected by copyright. They are provided here for reasonable academic fair use.
Copyright © 2018