Raspberry Pi Camera

== Introduction ==
rpicam is based on libcamera; still images are taken with <code>rpicam-still</code>.

To save a raw file alongside the JPEG: <code>rpicam-still -r -o path/to/file.jpg</code>

Some settings:
* <code>--qt-preview</code> use the Qt preview window, which also works remotely over SSH with X forwarding (unlike the default, hardware-accelerated preview).
* <code>--timeout 0</code> disable the timeout, so the preview runs indefinitely
* <code>--shutter 100000</code> set the exposure time in microseconds; 100000 µs is a 0.1 s exposure
* <code>-r</code> (<code>--raw</code>) save the raw Bayer data as a DNG file alongside the JPEG
* <code>--encoding png</code> save the image as PNG instead of JPEG
* <code>-n</code> (<code>--nopreview</code>) do not show a preview window
* <code>--width 4056 --height 3040</code> set the capture resolution (4056 × 3040 is the HQ camera's maximum)
* <code>-o path/to/file.jpg</code> set the output file name
* <code>--gain 1 --awbgains 1,1</code> fix the gain and white-balance gains manually, disabling Automatic Exposure/Gain Control (AEC/AGC) and Auto White Balance (AWB); <code>--immediate</code> skips the preview phase entirely
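Putting several of these together, a hypothetical combined invocation (needs a camera attached, so untested here; the file name is illustrative) that captures a full-resolution PNG with fixed exposure and no preview:

<syntaxhighlight lang="bash">
# Fixed gain and white balance, no AEC/AGC or AWB settling, no preview window
rpicam-still --nopreview --immediate --gain 1 --awbgains 1,1 \
    --shutter 100000 --encoding png --width 4056 --height 3040 \
    -o path/to/file.png
</syntaxhighlight>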
=== White Balance ===
<code>libcamera-still --awb custom --awbgains 0.9,2.0 </code>
In <code>--awbgains 0.9,2.0</code>, we have <code>--awbgains red,blue</code> 
* 0.9 is the gain for the red channel and
* 2.0 is the gain for the blue channel.
In other words, if your image is:
* too red -> decrease the first number
* too blue -> decrease the second number
* too yellow -> increase the second number (blue)
* too green -> increase both numbers
* too pink -> decrease both numbers
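The rules of thumb above can also be estimated numerically. A minimal sketch under a grey-world assumption (the channel means of a neutral scene should be equal); <code>suggest_awb_gains</code> is a hypothetical helper, not part of libcamera or Picamera2:

<syntaxhighlight lang="python">
import numpy as np

def suggest_awb_gains(img):
    """Grey-world estimate: red/blue gains that bring the red and blue
    channel means in line with the green channel mean."""
    r, g, b = img.reshape(-1, 3).mean(axis=0)
    return float(g / r), float(g / b)  # (red, blue), as in --awbgains red,blue

# Synthetic RGB image with a strong red cast: R twice as bright as G and B.
img = np.zeros((4, 4, 3))
img[..., 0] = 200.0  # R
img[..., 1] = 100.0  # G
img[..., 2] = 100.0  # B
print(suggest_awb_gains(img))  # → (0.5, 1.0): lower the red gain
</syntaxhighlight>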


== RPi camera rev 1.3 ==
Focal length 3.60 mm.

The v1 camera is based on the Omnivision OV5647 sensor.


== RPi Cam ver 2 ==
=== Specs ===


* Size ~ 25 × 24 × 9 mm
* 8 MP, 1080p47, 1640 × 1232p41 and 640 × 480p206 / 3280 × 2464 pixels
* Sony IMX219
* Sensor area: 3.68 × 2.76 mm (4.6 mm diagonal); pixel size 1.12 µm × 1.12 µm
* Depth of field: approx. 10 cm to ∞
* f = 3.04 mm
* Horizontal FoV 62.2°, vertical FoV 48.8°
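The FoV figures are consistent with the sensor geometry above; a quick sanity check with the pinhole model, FoV = 2·atan(d / 2f):

<syntaxhighlight lang="python">
import math

def fov_deg(sensor_mm, focal_mm):
    """Pinhole-model angular field of view for one sensor dimension."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

print(round(fov_deg(3.68, 3.04), 1))  # horizontal: ≈ 62.4 (quoted 62.2°)
print(round(fov_deg(2.76, 3.04), 1))  # vertical:   ≈ 48.8
</syntaxhighlight>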


Fixed-focus module. Adding a +2 dioptre close-up lens allows you to focus at about 25 cm from the target: https://raspi.tv/2013/adapt-your-raspberry-pi-camera-for-close-up-use

Drawings with measurements: https://datasheets.raspberrypi.com/camera/camera-module-2-mechanical-drawing.pdf
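The ~25 cm figure is consistent with thin-lens arithmetic: a close-up lens of power D dioptres shifts a lens natively focused at u₀ metres to 1/(D + 1/u₀). A sketch, assuming a native focus distance of about 0.5 m (an assumption for illustration, not a published spec):

<syntaxhighlight lang="python">
def closeup_focus_m(native_focus_m, diopters):
    """Thin-lens estimate of the focus distance with a close-up lens added."""
    return 1.0 / (diopters + 1.0 / native_focus_m)

print(closeup_focus_m(0.5, 2))  # → 0.25, i.e. about 25 cm
</syntaxhighlight>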


M12 lens recommendations: https://www.gaojiaoptotech.com
=== Macro photos with V2 ===
Take a V2 camera and screw out the lens until it focuses at about 10 cm.
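To get a rough idea of what focusing at 10 cm buys you, the thin-lens model gives magnification m = f/(u − f) and a subject field width of sensor width / m. For the V2's f = 3.04 mm and a subject ~100 mm from the lens (an idealization; distances measured from the lens):

<syntaxhighlight lang="python">
def magnification(f_mm, subject_dist_mm):
    """Thin-lens magnification for a subject at subject_dist_mm from the lens."""
    return f_mm / (subject_dist_mm - f_mm)

m = magnification(3.04, 100)            # focused at about 10 cm
field_mm = 3.68 / m                     # subject width that fills the sensor
print(round(m, 3), round(field_mm, 1))  # ≈ 0.031 and ≈ 117.4 mm
</syntaxhighlight>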
=== M12 lenses ===
With any M12 mount, you do have to remove the tiny factory-stock lens from the sensor block first.
Holder: https://www.thingiverse.com/thing:3347061
=== Focusing options ===
Before using these tools, you can take a hobby knife and carefully cut away the spots of glue so that the lens rotates freely, but this is usually not necessary: a slight twist of the tool and the glue gives way with a little snap.
Tools for changing the lens (to change the focus distance):
* Key for adjusting lens of RaspiCam V2.1  https://www.thingiverse.com/thing:1941802
* A simple wrench to loosen the screw lens without damaging the plastic. Use some pliers to hold the camera unit steady so that it doesn't come off the base. https://www.thingiverse.com/thing:1570865
* Focus ring to manual focusing https://www.thingiverse.com/thing:211641
* Objective adapter (microscope objective lens (reversed)) https://www.thingiverse.com/thing:1565909


=== Spectroscope ===
https://github.com/leswright1977/PySpectrometer2?tab=readme-ov-file

* A CCTV lens with zoom (M12 thread); search eBay for an F1.6 zoom lens


== HQ Cameras ==


=== Microscopes ===
Some microscopes


* Pimoroni Lens
* MicroscoPy IBM + LEGO: https://github.com/IBM/MicroscoPy https://spectrum.ieee.org/build-a-sophisticated-microscope-using-lego-3d-printing-arduinos-and-a-raspberry-pi
* M4All (open source): https://github.com/NanoBioPhotonics-Strathclyde/M4All
* Picroscope https://www.instructables.com/Picroscope-a-Low-Cost-Interactive-Microscope/
* MicroscoPi https://microscopiproject.wordpress.com/
* Soldering microscope https://www.instructables.com/Raspberry-Pi-Zero-HDMIWiFi-Soldering-Microscope/
* OpenFlexure https://openflexure.org/projects/microscope/ https://openflexure.org/
* gjcroft's https://github.com/gjcroft/microscope
* PiAutoStage, an adaptable motorized stage: https://www.raspberrypi.com/news/piautostage-a-universally-adaptable-microscope-stage/
* Pi4 Microscope https://www.waveshare.com/wiki/Pi4_Microscope_Kit
* Pi Microscope https://hackaday.io/project/167996-pi-microscope#menu-description
* Core Electronics Microscope https://core-electronics.com.au/projects/raspberry-pi-microscope/
* RPiScope https://www.instructables.com/ARPM-Another-raspberry-pi-microscope-made-from-Ple/
* Microscope-PiCam https://github.com/henkrijneveld/Microscope-PiCam
* pi-macroscope https://github.com/leoscholl/pi-macroscope
=== Macro photography ===
Reproduction ratio: The ratio of the subject size on the sensor plane to the actual subject size.
* Auxiliary [https://en.wikipedia.org/wiki/Close-up_lens close-up lens].
* Extension tubes to move the lens farther from the sensor, so it focuses closer
* Wide-angle lens used as a reversed lens in front of a macro lens https://forums.raspberrypi.com/viewtopic.php?f=43&t=276084
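The definition above in code form; the 36.8 mm subject width is an illustrative number set against the V2's 3.68 mm sensor width:

<syntaxhighlight lang="python">
def reproduction_ratio(image_size_mm, subject_size_mm):
    """Size of the subject's image on the sensor divided by its real size."""
    return image_size_mm / subject_size_mm

# A 36.8 mm wide subject exactly filling the 3.68 mm wide V2 sensor:
print(round(reproduction_ratio(3.68, 36.8), 3))  # → 0.1, i.e. 1:10
</syntaxhighlight>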
== SSH and Python ==
=== Virtual Environment ===
Note! The easiest way to use picamera2 inside a virtual environment is to create the environment with <code>--system-site-packages</code> (see https://forums.raspberrypi.com/viewtopic.php?t=361758):
<code>
python3 -m venv --system-site-packages env
</code>
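Typical use of such an environment (the directory name <code>env</code> is arbitrary):

<syntaxhighlight lang="bash">
# Create the environment with access to the system-installed picamera2
python3 -m venv --system-site-packages env
# Activate it and confirm the interpreter now comes from the venv
. env/bin/activate
python -c "import sys; print(sys.prefix)"
</syntaxhighlight>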
=== Take a video ===
The following command
<code> rpicam-still --qt-preview --timeout 0</code>
works and opens a small preview window over the SSH connection.
=== Python & Matplotlib ===
Matplotlib works, and thus e.g. <code>plt.show()</code> works charmingly even with YOLO.
=== Python & QT ===
Using Pillow or cv2 makes life harder; <code>cv2.imshow("Camera", img)</code> gives an error message
<code>
[ WARN:0@0.095] global cap_gstreamer.cpp:1173 isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
</code>
Thus, we might want to use GStreamer or Qt. A great tutorial about Qt can be found at [https://gist.github.com/docPhil99/ca4da12c9d6f29b9cea137b617c7b8b1 How to display opencv video in pyqt apps].
=== Python, QTPreview and PiCam2 ===
<syntaxhighlight lang="python">
import time
from picamera2 import Picamera2
picam2 = Picamera2()
picam2.start(show_preview=True)
time.sleep(5)
</syntaxhighlight>
works; alternatively, explicitly select the Qt preview:
<syntaxhighlight lang="python">
from picamera2 import Picamera2, Preview
from time import sleep
from libcamera import Transform
picam2 = Picamera2()
picam2.start_preview(Preview.QT, transform=Transform(hflip=True, vflip=True))
picam2.start()
sleep(2)
picam2.close() 
</syntaxhighlight>
According to the manual (https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf) ''The QtGL preview window is not recommended when the image needs to be shown on a remote display (not connected to the Pi).'' Also the DRM driver didn't work out of the box, as is indicated in the manual. See chapter 3.4 Remote preview windows.
=== Extracting Frames ===
<syntaxhighlight lang="python">
import cv2
from picamera2 import Picamera2

height = 480
width = 640
middle = (width // 2, height // 2)

cam = Picamera2()
cam.configure(cam.create_video_configuration(main={"format": 'XRGB8888',
                                                   "size": (width, height)}))
cam.start()
while True:
    frame = cam.capture_array()
    cv2.circle(frame, middle, 10, (255, 0, 255), -1)  # mark the image centre
    cv2.imshow('frame', frame)
    if cv2.waitKey(1) == ord('q'):  # press 'q' to quit
        break
cv2.destroyAllWindows()
</syntaxhighlight>
=== Frames & Yolo ===
The following code detects objects in the frames, but the per-frame processing time is about 300 ms, so it is a bit slow for most real-time applications.
<syntaxhighlight lang="python" line>
import cv2
from picamera2 import Picamera2
from ultralytics import YOLO
# Initialize the Picamera2
picam2 = Picamera2()
picam2.preview_configuration.main.size = (1280, 720)
picam2.preview_configuration.main.format = "RGB888"
picam2.preview_configuration.align()
picam2.configure("preview")
picam2.start()
# Load the YOLO11 model
model = YOLO("yolo11n.pt")
while True:
    # Capture frame-by-frame
    frame = picam2.capture_array()
    # Run YOLO11 inference on the frame
    results = model(frame)
    # Visualize the results on the frame
    annotated_frame = results[0].plot()
    # Display the resulting frame
    cv2.imshow("Camera", annotated_frame)
    # Break the loop if 'q' is pressed
    if cv2.waitKey(1) == ord("q"):
        break
</syntaxhighlight>



Latest revision as of 15:12, 12 November 2025





