
The first time you start Stream, a config.ini is created in the directory you have specified. Please note that anytime you change the contents of config.ini, you will need to restart the Docker container. When parameters are defined at the [cameras] level, they apply to all cameras. When they are defined at the [[camera-id]] level, they apply to a single camera.

    # List of TZ names on
    timezone = UTC

    [cameras]
      # Full list of regions:
      # regions = fr, gb

      # Image file name, you can use any format codes from
      image_format = $(camera)_screenshots/%y-%m-%d/%H-%M-%S.%f.jpg

      # See the complete list of parameters at
      # For example, you may add:
      # mmc = true

      [[camera-1]]
        active = yes
        url = rtsp://
        name = Camera One

        # Save to CSV file. The corresponding frame is stored as an image in the same directory.
        # See more formats on
        csv_file = $(camera)_%y-%m-%d.csv

Helpful Tips#

Please read the tips below to get the most out of your Stream deployment.

All Use Cases
  1) Set Region Code to tune the engine for your specific country(s) or state(s).
  2) Set mmc = true if your license includes Vehicle Make Model Color.
  3) Set detection_mode = vehicle if you want to detect vehicles without a visible plate.
  4) Forward results to ParkPow if you need a dashboard with proactive alerts, etc.

Parking (gated communities, toll, weigh bridges, and other slow-traffic use cases)
  1) Set max_prediction_delay to 1-2 seconds to read a plate more quickly.
  2) Set Sample Rate to achieve an effective FPS of 4-5, since vehicle speeds are fairly slow.
  3) Apply Detection Zones so Stream ignores certain areas of the camera view.
  4) Forward results to Gate Opener if you need to open a gate.

ALPR Inside a Moving Car (police surveillance, dashcams, and drones)
  1) Turn on Merge Buffer (merge_buffer = 1) so Stream can compensate for "road bumps" and other vibrations.
  2) Keep the Sample Rate (sample) at 1-3 so that Stream can process most or all of the frames, especially if vehicles are moving fast.

Highway, Street Monitoring
  1) Apply Detection Zones so Stream ignores certain areas of the camera view.
  2) Adjust Sample Rate based on the speed of the vehicles.


All parameters are optional except url.


  1. To run Stream on an RTSP camera feed, point the url parameter to the RTSP stream. For example:
    url = rtsp://
    url = rtsp://admin:12345@ # where admin is the username and 12345 is the password.
    If processing a stream that uses the UDP protocol (such as one served by VLC), include this option in the Stream run command:
    -e OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;udp"
  2. For additional help with RTSP, please go to
  3. You can also process video files.
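Putting the pieces together, a typical invocation might look like this. This is only a sketch: the image name, flags, token, license key, and volume path are assumptions here, so check your setup instructions for the exact command.

```shell
# Hypothetical Stream run command with the UDP transport option included.
# Replace MY_TOKEN, MY_LICENSE_KEY, and the config directory with your own values.
docker run -t --restart="unless-stopped" \
    -v ~/stream:/user-data \
    -e TOKEN=MY_TOKEN \
    -e LICENSE_KEY=MY_LICENSE_KEY \
    -e OPENCV_FFMPEG_CAPTURE_OPTIONS="rtsp_transport;udp" \
    platerecognizer/alpr-stream
```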


  1. Stream processes all the cameras defined in the config file if active is set to yes. See example above.
  2. Stream will automatically reconnect to the camera stream when there is a disconnection. There is a delay between attempts.


Include one or multiple regions from this list:
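For example, to tune the engine for French and British plates (the fr and gb codes appear in the sample config above):

```ini
[cameras]
  # Match plates from France and Great Britain only
  regions = fr, gb
```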


  1. Get Vehicle Make, Model and Color (MMC) identification from a global dataset of 9000+ make/models.
  2. Vehicle Orientation refers to Front or Rear of a vehicle.
  3. Direction of Travel is an angle in degrees between 0 and 360 of a Unit Circle. Examples:
    1. 180° = Car going left.
    2. 0° = Car going right.
    3. 90° = Car going upward.
    4. 270° = Car going downward.
  4. The output is included in both the CSV file and the Webhooks payload.
  5. Please note that the Stream Free Trial does not include Vehicle MMC. To get Vehicle MMC on Stream, subscribe.
  6. If you have a subscription for Vehicle MMC, then add this line: mmc = true
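The angle convention above follows the unit circle, so it can be bucketed into rough travel directions. This hypothetical helper is only an illustration of the convention and is not part of Stream:

```python
def travel_direction(angle: float) -> str:
    """Map a direction-of-travel angle (degrees, unit circle) to a rough label."""
    angle = angle % 360
    # Boundaries at 45/135/225/315 degrees split the circle into
    # right / up / left / down buckets centered on 0/90/180/270.
    if angle < 45 or angle >= 315:
        return "right"
    if angle < 135:
        return "up"
    if angle < 225:
        return "left"
    return "down"
```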


  1. Set the time delay (in seconds) for the vehicle plate prediction.
  2. The default value is 6 seconds. So by default, Stream waits at most 6 seconds before it sends the plate output via Webhooks or saves the plate output in the CSV file.
  3. For Parking Access use-cases, where the camera can see the vehicle approaching the parking gate, you can decrease this parameter to say 3 seconds to speed up the time it takes to open the gate.
  4. In situations where the camera cannot see the license plate well (say, due to an obstruction or a lower-resolution camera), increasing max_prediction_delay gives Stream a bit more time to find the best frame for the best ALPR results, without rushing.
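For example, for a parking gate where a fast read matters more than waiting for the best frame:

```ini
[[camera-1]]
  active = yes
  url = rtsp://
  # Send results after at most 3 seconds instead of the default 6
  max_prediction_delay = 3
```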


  1. Set the minimum time (in seconds) before Stream detects the same vehicle again on a particular camera. This has no effect if a vehicle is seen by multiple cameras.
  2. If this parameter is omitted, Stream will use the default value, which is 300 seconds (5 minutes).
  3. This can be useful in parking access situations where the same vehicle may turn around and approach the same Stream camera.
  4. By flushing out the detection memory, Stream will be able to recognize and decode that same vehicle.
  5. The minimum value is 0.1 seconds. But you don't want to set it too low: if the camera sees that same vehicle again (say 0.2 seconds later), Stream will count it again in the ALPR results.


  1. You can set the timezone for the timestamp in the CSV and also Webhooks output.
  2. If you omit this field, then the default timezone output will be UTC.
  3. Please refer to the timezones in
  4. Plate Recognizer automatically adjusts for time changes (e.g. daylight saving and standard time) in each timezone. Examples: a) For Silicon Valley, use timezone = America/Los_Angeles. b) For Budapest, use timezone = Europe/Budapest (timezone = Europe/Berlin also works).

The timestamp field is the time the vehicle was captured by the camera. We are using the operating system clock and we apply the timezone from config.ini.
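The daylight-saving adjustment can be seen with Python's zoneinfo module. This is shown only to illustrate the timezone names; Stream handles the adjustment internally:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # Python 3.9+

tz = ZoneInfo("America/Los_Angeles")
winter = datetime(2020, 1, 15, 12, 0, tzinfo=tz)  # PST, UTC-8
summer = datetime(2020, 7, 15, 12, 0, tzinfo=tz)  # PDT, UTC-7

# The UTC offset shifts by one hour between standard and daylight time.
print(winter.utcoffset(), summer.utcoffset())
```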


  1. Set Stream to skip frames of a particular camera or video file.
  2. By default, sample = 2, so Stream processes every other frame.
  3. Set sample = 3 if you want to process every third frame.
  4. This parameter lets you skip frames in situations where you have limited hardware and/or do not need to process all the frames of the video or camera feed.
  5. See section on Optimizing Stream for more info.
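The effective processing rate is simply the camera frame rate divided by sample. This small illustration (not Stream code) shows how one might pick sample for a target effective FPS:

```python
import math

def pick_sample(camera_fps: float, target_fps: float) -> int:
    """Choose the sample value so that camera_fps / sample <= target_fps."""
    return max(1, math.ceil(camera_fps / target_fps))

# A 30 FPS camera processed at an effective 5 FPS needs sample = 6.
print(pick_sample(30, 5))
```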


When enabled, Stream only accepts results that exactly match the plate templates of the specified region. For example, if the region's license plates have 3 letters and 3 numbers, the value abc1234 will be discarded. To turn this on, add region_config = strict.
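As an illustration of the strict-matching idea (the real region templates live inside the engine), a hypothetical region whose plates are 3 letters followed by 3 digits would behave like this:

```python
import re

# Hypothetical template: exactly 3 letters followed by 3 digits.
TEMPLATE = re.compile(r"[a-z]{3}[0-9]{3}")

def matches_template(plate: str) -> bool:
    """Return True only if the plate exactly matches the region template."""
    return TEMPLATE.fullmatch(plate.lower()) is not None

print(matches_template("abc123"))   # fits the 3-letter/3-digit template
print(matches_template("abc1234"))  # one digit too many, discarded
```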


There are 2 options. When set to detection_rule = strict (the default), license plates detected outside a vehicle are discarded. To keep license plates found outside a vehicle, use detection_rule = normal.


Detection of vehicles without plates can be enabled with detection_mode = vehicle. This output uses a different format: the license plate object becomes optional, and the optional props element holds the object properties (make/model or license plate text).


Improve accuracy when Stream is used with a moving camera (for example, a camera mounted on a car). Set merge_buffer = 1 to turn this on. This setting will increase compute.

Output Formats#


  1. Indicate the filename of the CSV output you'd like to save. In the example above, we named the CSV file camera-1.csv.
  2. The name can be dynamic. Refer to the field image_format for details. For example: csv_file = $(camera)_%y-%m-%d.csv


  1. Save the prediction results to a JSON Lines file. For example:
  • jsonlines_file = my_camera_results.jsonl
  • jsonlines_file = $(camera)_%y-%m-%d.jsonl
  2. We use the JSON Lines format. Refer to the field image_format for details.
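A JSON Lines file holds one JSON object per line, so the results can be read back like this (the field name in the test data is hypothetical, for illustration only):

```python
import json

def read_results(path: str) -> list[dict]:
    """Parse a JSON Lines file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as fh:
        return [json.loads(line) for line in fh if line.strip()]
```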


  1. Save images in a particular folder with a specific filename. In the example above, it saves images as one folder per camera and each image is named camera_timestamp
  2. Customize with the following examples:
    • $(camera) is replaced by camera-1
    • If the current time is 2020-06-03 20:54:35, %y-%m-%d/%H-%M-%S.jpg is replaced by 20-06-03/20-54-35.jpg. Tokens starting with a percent sign are converted according to strftime format rules.
    • To put images from all cameras into the same folder: image_format = screenshots/$(camera)_%y-%m-%d_%H-%M-%S.%f.jpg
  3. If you don't need to save images, you can leave it empty: image_format =
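The format codes are standard strftime codes, so a pattern can be previewed in Python. The $(camera) substitution is Stream's own and is mimicked here with a plain string replace:

```python
from datetime import datetime

pattern = "$(camera)_screenshots/%y-%m-%d/%H-%M-%S.jpg"
now = datetime(2020, 6, 3, 20, 54, 35)  # example timestamp from the docs

# Substitute the camera name first, then expand the strftime codes.
path = now.strftime(pattern.replace("$(camera)", "camera-1"))
print(path)  # camera-1_screenshots/20-06-03/20-54-35.jpg
```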

Webhook Parameters#

Stream can automatically forward the results of the license plate detection and the associated vehicle image to a URL. You can use Webhooks as well as save images and CSV files in a folder. By default, no Webhooks are sent.

  • If the target cannot be reached (HTTP errors or timeout), the request is retried 3 times with a 10-second interval.
  • If it still fails, the data is saved to disk. This can be turned off using webhook_caching.
    • When a webhook fails, all new webhook data will directly be saved to disk for a period of 5 minutes. After that webhooks are processed normally.
    • If a new webhook is received and processed successfully, we will also process the data saved to disk if there is any.
    • When webhooks are saved to disk, we remove the oldest data when the free disk space is low.
  • The webhook data uses the timestamp set at capture time.
  • The response contains both the UTC timestamp as well as the local timestamp that reflects the timezone set in Stream.


  • The recognition data and vehicle image are encoded in multipart/form-data.
  • To ensure that your webhook_target endpoint is correct, please test it out at
  • To read the webhook message, you can use our tiny server in Python.
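A minimal sketch of such a receiver is shown below. This is not the official script; it simply accepts the multipart POST, logs its size, and acknowledges it:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    """Accept Stream's multipart/form-data POST and acknowledge it."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)  # raw multipart payload
        print(f"received {length} bytes from {self.client_address[0]}")
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"OK")

def run(port: int = 8001) -> None:
    """Serve forever; point webhook_targets at http://<host>:<port>/."""
    HTTPServer(("0.0.0.0", port), WebhookHandler).serve_forever()
```

In a real deployment you would parse the multipart body to extract the JSON results and the image; here the payload is read but left unparsed.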

You can send multiple targets by simply listing all the targets. Each target shares the same webhook_image and webhook_image_type property.

webhook_targets =,

To test if the target is working correctly, you can use this command to send an example webhook. The payload contains an image and the matching license plate information. Replace TARGET_URL with your server URL.

docker run -e URL=http://TARGET_URL platerecognizer/webhook-tester


  1. This field can be set to either:
    webhook_image = yes
    webhook_image = no
  2. When webhook_image = no is set, only the decoded plate information is sent; the image data is not sent to the target URL. This lets your system receive the plate information faster, which is especially useful for Parking Access situations where you need to open an access gate based on the decoded license plate.
  3. When webhook_image = no, the license plate bounding box is calculated based on the original image. Otherwise, it is calculated based on the image sent in the webhook.


  1. This field can be set to either:
    webhook_image_type = original
    webhook_image_type = vehicle
  2. When set to original, the webhook will send the full-size original image from the camera.
  3. When set to vehicle, the webhook will send only the image contained within the bounding box of each vehicle detected.


If a webhook fails, it is by default cached to disk and retried later. To turn this off, use webhook_caching = no. This option was added in version 1.29.0.


A webhook request will timeout after webhook_request_timeout seconds. The default value is 30 seconds.

Forwarding ALPR to ParkPow (example only)#

  1. To forward ALPR info from Stream over to ParkPow (our ALPR Dashboard and Parking Management solution), please refer to this example below:
    webhook_targets =
    webhook_header = Authorization: Token 5e858******3c
    webhook_image = yes
    webhook_image_type = vehicle
  2. Please note the addition of the webhook_header parameter when sending info to ParkPow via webhooks.

Other Parameters#

Detection Zone#

  1. Detection Zones exclude overlay texts, street signs or other objects. For more info, read our blog on Detection Zones.
  2. To start, go to the Detection Zone page in your Plate Recognizer Account.
  3. Make sure that the Stream camera_id set in Detection Zone is the same as the camera_id in your Stream config file.
  4. After you upload an image from that specific Stream camera, you can use the marker to mask the areas you want Plate Recognizer to ignore.
  5. Make sure to restart the Docker container to see the effects. When you open your Stream folder you will now see one file per zone.

To remove a detection zone, click Remove on Detection Zone. Then remove the file zone_camera-id.png from the Stream folder.