Here you will find information on how to successfully connect and configure the camera.
To call up the configuration page of the Perception Box, simply click on the respective box in the list view. This allows you to manage all cameras running on this device.
You are of course completely free to name your BERNARD devices as you wish; we recommend choosing a consistent, logical naming scheme.
Click on one of the displayed cameras to open its settings, where you can name the camera. At the top, you have the option of deactivating the camera stream. A deactivated camera stream is no longer taken into account by the software; its configuration is retained.
As soon as you have configured the camera connection (Camera Connection), you will see a preview of the camera image. You can now continue with the scenario configuration.
Now that you see your camera images, it's time for the Configuration. This is where the magic happens!
Since our software is mainly intended and used for specific use cases, you will find all the information you need for the perfect setup in the respective sections for your use case:
In the configuration, you can select the model suitable for your use case and configure any combination of event triggers and additional functions.
The model is the detection engine with which the BERNARD devices work.
Below you will find a brief description of each model. To decide which one you want or should use for your application, please read the section on the respective use case:
This model detects vehicles, people on two-wheelers and people in highly dynamic scenes (e.g. road traffic or highways, i.e. scenes in which objects move quickly).
This model detects vehicles, people on two-wheelers and people in scenes with low dynamics, for example parking lots where objects are not moving or are moving slowly. Since this model analyzes the video at a higher resolution, it can detect objects that are further away from the camera. This requires more computing power and is therefore currently only recommended for this use case.
This model detects vehicles, people on two-wheelers and people in low dynamic scenes in parking lots when a fisheye camera is used. This does not work for fast, dynamic traffic scenes.
This model detects the entire body of a person. This makes it ideal for detecting, tracking and counting people when they are further away (>5 m) from the camera.
This model detects a person's head and is therefore ideal for detecting, tracking and counting people when they are closer (<6 m) to the camera.
When detecting and tracking people, we never perform facial recognition. No sensitive personal information of any kind is processed. Contact us at any time if you have any questions about personal data or data protection.
Each event trigger generates a unique ID in the background. You can assign user-defined names so that you can keep track of all configured triggers. This name is then used to display the corresponding data in the data evaluation tool.
Below you will find explanations of the individual event types and triggers:
We provide various templates for the three different use cases to make configuration and general use as easy as possible for you.
Parking events: Templates for all use cases in the area of parking space monitoring
Traffic events: Templates for use in the area of traffic monitoring and traffic safety
People events: These templates are perfect for use with our People Full Body or People Head models
The description of the available event triggers and the individually customizable event settings can be found in the following table:
**Counting Lines (CL) trigger a count as soon as the center of an object crosses the line.** When configuring a CL, it is important that the camera perspective is taken into account.
The CL also logs the direction in which the object crossed the line as In or Out. You can swap In and Out at any time to adjust the direction accordingly. In addition, a separate name can be assigned to each of the two directions.
By default, a CL only counts each object once. If every crossing is to be counted, there is the option to activate events for repeated CL crossings. The only restriction is that repeated crossings are only counted if at least five seconds lie between them.
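To illustrate how the In/Out direction and the five-second rule work together, here is a minimal sketch in Python. It is purely illustrative and not the actual implementation on the device; the function names and the side-of-line test are assumptions.

```python
import time

def side_of_line(line, point):
    """Sign of the cross product: tells on which side of the directed
    counting line (x1, y1) -> (x2, y2) the object center lies."""
    (x1, y1), (x2, y2) = line
    px, py = point
    return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

MIN_GAP_SECONDS = 5.0     # repeated crossings closer together are not counted
last_count_time = {}      # object id -> timestamp of the last counted crossing

def check_crossing(line, object_id, prev_center, curr_center, now=None):
    """Return "In", "Out" or None for one tracked object between two frames."""
    now = time.time() if now is None else now
    before = side_of_line(line, prev_center)
    after = side_of_line(line, curr_center)
    if before == 0 or after == 0 or (before > 0) == (after > 0):
        return None       # the object center did not cross the line
    if now - last_count_time.get(object_id, float("-inf")) < MIN_GAP_SECONDS:
        return None       # repeated crossing within five seconds -> ignored
    last_count_time[object_id] = now
    # Which side counts as "In" is a convention; in the UI you can swap In and Out.
    return "In" if after > 0 else "Out"
```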
In addition to the repeated CL crossings, ANPR and Speed Estimation are also available as triggers or settings.
_Speed Estimation_ can be activated as a special trigger setting for a CL in the left sidebar. This adds another line that you can use to specify the distance in your scenario. For the best results, we recommend a straight section without curves or slopes.
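Conceptually, the speed estimate is the configured real-world distance divided by the time the object needs to travel it. The following sketch only illustrates that calculation; the function and parameter names are assumptions, not part of the product.

```python
def estimate_speed_kmh(distance_m, t_first_line, t_second_line):
    """Speed from the configured distance (metres) between the two lines and
    the timestamps (seconds) at which the object crossed each of them."""
    travel_time_s = t_second_line - t_first_line
    if travel_time_s <= 0:
        raise ValueError("the second crossing must happen after the first")
    return distance_m / travel_time_s * 3.6   # m/s -> km/h

# Example: 20 m between the lines, crossed 1.5 s apart -> 48 km/h
print(estimate_speed_kmh(20.0, t_first_line=10.0, t_second_line=11.5))
```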
**Regions of Interest (RoI) count objects in a specific area.** In addition, the class (Class) and dwell time (Dwell Time) are determined and reported.
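To make the relationship between an RoI, the object class and the dwell time concrete, here is a minimal, hypothetical sketch: the dwell time is simply the span between the timestamps at which a tracked object enters and leaves the region.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoiVisit:
    """One tracked object inside a Region of Interest (illustrative only)."""
    object_class: str                 # e.g. "car" or "person"
    entered_at: float                 # timestamp in seconds
    left_at: Optional[float] = None   # set once the object leaves the region

    @property
    def dwell_time_s(self) -> Optional[float]:
        if self.left_at is None:
            return None               # the object is still inside the RoI
        return self.left_at - self.entered_at

visit = RoiVisit(object_class="car", entered_at=100.0)
visit.left_at = 184.0
print(visit.object_class, visit.dwell_time_s)   # car 84.0
```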
Depending on the scenario, we can distinguish between three types of RoI. We offer the templates described below for these three types:
These zones are used for origin-destination analyses. Counts are generated when an object moves through OD1 and then OD2. At least two areas must be configured for an OD.
The first zone that the object passes through is called the origin zone (Origin). The last zone that the object passes through is referred to as the destination zone (Destination).
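Put differently: take the ordered sequence of OD zones an object passed through, use the first as the origin and the last as the destination, and count that pair. A minimal sketch of the idea (zone names are placeholders):

```python
from collections import Counter

od_counts = Counter()   # (origin, destination) -> number of counted objects

def record_od(zones_passed):
    """zones_passed is the ordered list of OD zone names one object moved through."""
    if len(zones_passed) < 2:
        return           # an origin-destination count needs at least two zones
    origin, destination = zones_passed[0], zones_passed[-1]
    od_counts[(origin, destination)] += 1

record_od(["OD1", "OD2"])
record_od(["OD1", "OD3", "OD2"])   # intermediate zones do not change the pair
print(od_counts)                   # Counter({('OD1', 'OD2'): 2})
```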
Your devices send the results of the real-time analysis to an MQTT broker. The default configuration sends data to the Azure Cloud and to Data Analytics so that it can be retrieved there. If you would like to set up a custom MQTT broker, please contact our Support Team.
Message compression can save up to 50% of the bandwidth used to send events to the MQTT broker. Please note that the broker and the receiving application must support zlib/inflate decompression.
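For a custom setup, receiving the events typically means subscribing to the broker and, if compression is active, inflating the payload. Below is a minimal sketch using the paho-mqtt client and Python's zlib module; the broker address, the topic and the assumption that events are JSON-encoded are placeholders, not the actual defaults.

```python
import json
import zlib

import paho.mqtt.client as mqtt   # pip install paho-mqtt

def on_message(client, userdata, msg):
    payload = msg.payload
    try:
        # With message compression enabled, the payload is zlib/deflate compressed.
        payload = zlib.decompress(payload)
    except zlib.error:
        pass                       # payload was sent uncompressed
    event = json.loads(payload)    # assumes the events are JSON-encoded
    print(msg.topic, event)

# Placeholder broker and topic -- replace with your own custom MQTT configuration.
client = mqtt.Client()             # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("broker.example.com", 1883)
client.subscribe("bernard/events/#")
client.loop_forever()
```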
| | Single Space Parking | Multi Space Parking | Generic |
|---|---|---|---|
| Event Trigger | Duration (Time) | Duration (Time) | Duration (Time) or Status (Occupancy) |
| Event Type | Parking | Parking | People and Traffic |
| Number of preset Objects | 1 | 5 | 1 |
| Color | dark green | purple | light green |