So this past Saturday, we had the SFU Summer Festival back at Simon Fraser University, held this year in the West Gym due to construction at the usual venue, the Convocation Mall. While the staff and volunteers worked hard to make the event a success, I decided to try two things at this new venue:
- Making a timelapse, and
- Making photos available on Discord, like the SFU webcam we have.
I’ve never done anything like this before and I’m still learning about cameras, but I thought it would be cool to see how many people were walking about. For people who couldn’t make it to the event, or for those still in transit, it would be neat to see what the traffic was like.
This blog post outlines what happened in the two weeks leading up to the event. I’ve included some links and stuff below, as well as some things I’ve learned while trying this out. Hopefully you’ll find something interesting along the way. A video of the unmodified timelapse is near the bottom.
Before starting, I brought up the idea to the SFU Anime Club, where we had some members affiliated with the organization of the event. It wasn’t shot down, so I looked for things that could help me achieve this. My first test run used a webcam I had lying around, along with a Raspberry Pi 3 B+. My initial script was in bash: I used a program called fswebcam, which is described on the official Raspberry Pi website. It worked and had everything I wanted, including timestamps and a customizable title. It would save images to a folder at set intervals, and I could take the latest one and upload it to Discord. The downside was that it only worked intermittently, which was frustrating because that meant skipped frames.
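For the curious, the setup looked roughly like this. This is a minimal sketch, not the original script (which was in bash): a Python loop shelling out to fswebcam, with the device path, resolution, and title as placeholders. Double-check the flags against `man fswebcam` for your version.

```python
import subprocess
import time
from datetime import datetime

def build_fswebcam_cmd(device, resolution, title, outfile):
    """Build an fswebcam invocation. The -d/-r/--title flags are the
    common ones; verify against your fswebcam's documentation."""
    return [
        "fswebcam",
        "-d", device,          # e.g. /dev/video0
        "-r", resolution,      # e.g. 1280x720
        "--title", title,      # banner text burned into the frame
        outfile,
    ]

def capture_loop(interval_s=20):
    # Take a frame every interval_s seconds, naming files by timestamp
    # so the newest capture is easy to find later.
    while True:
        outfile = datetime.now().strftime("captures/%Y%m%d-%H%M%S.jpg")
        cmd = build_fswebcam_cmd("/dev/video0", "1280x720",
                                 "SFU Summer Festival", outfile)
        subprocess.run(cmd, check=False)  # a failed grab just skips a frame
        time.sleep(interval_s)
```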
Before addressing those issues, I decided to implement #2. I ended up modifying our existing code to accept parameters, look through a folder of files with the help of glob to find the latest photo, and upload it to Discord. As the number of files grows, this can get slow, but for our purposes, if it can upload within a reasonable time (e.g. within 5 seconds on a good connection), that would be fine. More on that later.
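The "find the latest photo" part can be sketched in a few lines. This is my reconstruction rather than the actual club code; the function name and folder layout are made up:

```python
import glob
import os

def latest_photo(folder):
    """Return the most recently modified .jpg in folder, or None.

    glob scans every matching file on each call, so this is O(n) in
    the number of captures -- fine for a few thousand images, but it
    won't scale much beyond that."""
    files = glob.glob(os.path.join(folder, "*.jpg"))
    return max(files, key=os.path.getmtime) if files else None
```

The Discord command would then just attach whatever `latest_photo("captures")` returns.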
As I tested this out on the SFU Anime Club Discord server, one member, Taiyaki, suggested I use OpenCV to do the capturing and post-processing. Since I was already using OpenCV for my capstone project, I had all the relevant dependencies installed on my Pi. With that, I switched from the bash script to a Python script. I got it working: the script would loop indefinitely, take a webcam capture every 20 seconds, add headers for the timestamp and camera name, and save it to a file. This worked much more consistently.
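The loop structure looked something like the sketch below. I've stubbed the OpenCV parts out as callables so the skeleton is self-contained; in the real script, `capture` wraps `cv2.VideoCapture(...).read()` and `save` calls `cv2.putText` to draw the header before `cv2.imwrite`. All names here are my own.

```python
import time
from datetime import datetime

def run_timelapse(capture, save, interval_s=20, max_frames=None):
    """Generic timelapse loop: grab a frame, stamp it, save it, sleep.

    capture() -> frame or None (None means the grab failed)
    save(filename, frame) persists the annotated frame
    max_frames=None loops indefinitely, as the real script did.
    """
    taken = 0
    while max_frames is None or taken < max_frames:
        frame = capture()
        if frame is not None:  # a failed grab just skips this slot
            stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
            save(f"cam0-{stamp}.jpg", frame)
            taken += 1
        time.sleep(interval_s)
```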
Next, the webcam I used was pretty bad in terms of quality. I ended up borrowing another webcam from a friend, which captured in 1080p HD. I swapped it in, and the resolution was better, but still pretty fuzzy. I also decided it would be neat to try using both cameras, so I modified the script to run the two cameras asynchronously, each saving to its own files. It worked out, and, all things considered, it was ready for the day of.
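I don't remember the exact mechanism, but one straightforward way to run two capture loops side by side is a thread per camera. This is a sketch under that assumption; threads fit here because the camera calls are blocking I/O:

```python
import threading

def run_in_parallel(loops):
    """loops: a list of zero-argument callables, each one a camera's
    capture loop bound to its own device and filename prefix. One
    thread per camera, so a slow grab on one device doesn't stall
    the other."""
    threads = [threading.Thread(target=fn) for fn in loops]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```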
As I continued testing during the week of the event, several people brought up camera quality again. At this point, I brought out my DSLR, a Canon 70D, and tried hooking it up to the Pi. I had done remote capturing before using Canon's programs on Windows, but not on GNU/Linux. I dug around and found a neat application called gphoto2, a CLI tool for libgphoto2 that lets you remotely control your camera. This was exactly what I needed. I got some test captures working, then integrated it with my Python script. All the post-processing still worked, and the end quality was so much better. I ended up capturing at 5 megapixels, which was a reasonable compromise between storage space and upload speed.
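Integrating gphoto2 from Python is mostly a matter of shelling out to the CLI. A minimal sketch, assuming the `--capture-image-and-download` and `--filename` flags (check `gphoto2 --help` for your version; the function names are mine):

```python
import subprocess

def gphoto2_capture_cmd(outfile):
    """One shot on the tethered camera, downloaded straight to outfile."""
    return ["gphoto2", "--capture-image-and-download", "--filename", outfile]

def capture_dslr(outfile):
    # Returns True on success, so the main loop can skip a failed frame
    # instead of crashing mid-timelapse.
    result = subprocess.run(gphoto2_capture_cmd(outfile), capture_output=True)
    return result.returncode == 0
```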
Another thing I needed to consider was battery life. Although the DSLR was connected via USB, the Pi doesn't power it over that connection. I left my DSLR out with the script running, and it lasted around 9 hours of continuous capture. The event ran from 1PM to 8PM, so if I arrived at 12PM and stayed until 9PM, that nine-hour window should be just within the battery's limit.
Fast-forward to the day of the event: I ended up setting up two cameras, the DSLR and the 1080p webcam, up on a balcony overlooking the artists below. I'm grateful to have a bunch of friends who helped keep watch on the camera setup at intervals so it would keep running, which let me walk around the place too. The end result was the following two videos. The second camera got bumped a little in the first few hours, so that can't be helped.
Here are some things I learned from the video:
- I should have switched the camera to manual focus after the initial focus, since the camera wasn’t going to move anyway. Because I didn’t, you’ll notice subsequent images shifting in and out of focus a little.
- Throughout the video, you’ll see images that are brighter and darker. That was my mistake of not fixing the shutter speed. I did fix the aperture at f/7.1, but the shutter speed definitely changed throughout. This can probably be fixed by normalizing the brightness of each photo beforehand.
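The normalization idea from the second point can be sketched simply: scale each frame so its mean brightness hits a common target. This toy version works on a flat list of grayscale values; a real pass would do the same per frame with numpy/OpenCV before assembling the video.

```python
def normalize_brightness(pixels, target_mean=128.0):
    """Scale pixel values so the frame's mean brightness equals
    target_mean, clamping to the valid 0-255 range. A constant
    target across all frames removes exposure flicker between
    shots taken at different shutter speeds."""
    if not pixels:
        return []
    mean = sum(pixels) / len(pixels)
    if mean == 0:
        return list(pixels)
    scale = target_mean / mean
    return [min(255, max(0, round(p * scale))) for p in pixels]
```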
In the end, this was a great event. I had a blast trying this little project out, and had fun walking around and bumping into so many people. The timelapse still came out decently, so I’m happy. We had about 1,500 images per camera, and even then, the Discord command fetched images within the 5-second constraint we had set out before. Of course this doesn’t scale, but it was sufficient for this little project.
For those interested in the code:
- The timelapse autocapture script is available here on GitHub.
- The Discord command update is available here on GitHub.
Anyways, that’s all I have for now, until next time.